[jira] [Created] (KAFKA-15646) Update ReassignPartitionsIntegrationTest once JBOD available

2023-10-19 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-15646:
---

 Summary: Update ReassignPartitionsIntegrationTest once JBOD 
available
 Key: KAFKA-15646
 URL: https://issues.apache.org/jira/browse/KAFKA-15646
 Project: Kafka
  Issue Type: Task
Reporter: Nikolay Izhikov
 Fix For: 3.7.0


Update ReassignPartitionsIntegrationTest once JBOD is available.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-15645) Move ReplicationQuotasTestRig to tools

2023-10-19 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-15645:
---

 Summary: Move ReplicationQuotasTestRig to tools
 Key: KAFKA-15645
 URL: https://issues.apache.org/jira/browse/KAFKA-15645
 Project: Kafka
  Issue Type: Task
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


The ReplicationQuotasTestRig class is used for measuring performance.
It contains dependencies on the `ReassignPartitionCommand` API.

To move all commands to the tools module, ReplicationQuotasTestRig must be
moved to tools as well.





[jira] [Created] (KAFKA-14730) Move AdminOperationException to server-commons

2023-02-17 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-14730:
---

 Summary: Move AdminOperationException to server-commons
 Key: KAFKA-14730
 URL: https://issues.apache.org/jira/browse/KAFKA-14730
 Project: Kafka
  Issue Type: Sub-task
Reporter: Nikolay Izhikov


AdminOperationException is used in the `core` module and will be used in the
`tools` module in commands like {{DeleteRecordsCommand}}.

The class needs to be moved to the `server-commons` module.





Re: [VOTE] KIP-865: Support --bootstrap-server in kafka-streams-application-reset

2022-09-13 Thread Nikolay Izhikov
Thanks all!

This KIP has passed with one +1 (non-binding) vote from myself
and three +1 (binding) votes from Chris Egerton, Bill Bejeck, and Guozhang Wang.

Tue, Sep 13, 2022 at 19:30, Nikolay Izhikov :

> +1 (non-binding)
>
> Mon, Sep 12, 2022 at 21:16, Chris Egerton :
>
>> +1 (binding). Thanks!
>>
>> On Mon, Sep 12, 2022 at 1:43 PM Bill Bejeck  wrote:
>>
>> > Thanks for the KIP!
>> >
>> > +1(binding)
>> >
>> > -Bill
>> >
>> > On Mon, Sep 12, 2022 at 1:39 PM Николай Ижиков 
>> > wrote:
>> >
>> > > Community, please, share your vote for this KIP.
>> > >
>> > > > On Sep 9, 2022, at 19:55, Guozhang Wang
>> > wrote:
>> > > >
>> > > > +1. Thanks.
>> > > >
>> > > > Guozhang
>> > > >
>> > > > On Fri, Sep 9, 2022 at 9:52 AM Николай Ижиков 
>> > > wrote:
>> > > >
>> > > >> Hello.
>> > > >>
>> > > >> I'd like to start a vote on KIP-865, which adds support for the
>> > > >> --bootstrap-server parameter in the kafka-streams-application-reset tool.
>> > > >>
>> > > >> Discuss Thread:
>> > > >> https://lists.apache.org/thread/5c1plw7mgmzd4zzqh1w59cqopn8kv21c
>> > > >> KIP:
>> > > >>
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-865%3A+Support+--bootstrap-server+in+kafka-streams-application-reset
>> > > >> JIRA: https://issues.apache.org/jira/browse/KAFKA-12878
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > -- Guozhang
>> > >
>> > >
>> >
>>
>


Re: [VOTE] KIP-865: Support --bootstrap-server in kafka-streams-application-reset

2022-09-13 Thread Nikolay Izhikov
+1 (non-binding)

Mon, Sep 12, 2022 at 21:16, Chris Egerton :

> +1 (binding). Thanks!
>
> On Mon, Sep 12, 2022 at 1:43 PM Bill Bejeck  wrote:
>
> > Thanks for the KIP!
> >
> > +1(binding)
> >
> > -Bill
> >
> > On Mon, Sep 12, 2022 at 1:39 PM Николай Ижиков 
> > wrote:
> >
> > > Community, please, share your vote for this KIP.
> > >
> > > > On Sep 9, 2022, at 19:55, Guozhang Wang
> > wrote:
> > > >
> > > > +1. Thanks.
> > > >
> > > > Guozhang
> > > >
> > > > On Fri, Sep 9, 2022 at 9:52 AM Николай Ижиков 
> > > wrote:
> > > >
> > > >> Hello.
> > > >>
> > > >> I'd like to start a vote on KIP-865, which adds support for the
> > > >> --bootstrap-server parameter in the kafka-streams-application-reset tool.
> > > >>
> > > >> Discuss Thread:
> > > >> https://lists.apache.org/thread/5c1plw7mgmzd4zzqh1w59cqopn8kv21c
> > > >> KIP:
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-865%3A+Support+--bootstrap-server+in+kafka-streams-application-reset
> > > >> JIRA: https://issues.apache.org/jira/browse/KAFKA-12878
> > > >
> > > >
> > > >
> > > > --
> > > > -- Guozhang
> > >
> > >
> >
>


Re: [DISCUSSION] New broker metric. Per partition consumer offset

2022-02-17 Thread Nikolay Izhikov
Hello, just a minor follow-up.

> Client is already collecting consumer-fetch-manager-metrics metrics, can 
> report them to cluster, the broker can feed metrics to subscriptions, and 
> this "just works" without new code in group coordinator.

This approach looks questionable to me.

The group leader already has this data on hand.
Why should we send it over the wire one more time?

> On Feb 16, 2022, at 22:20, Nikolay Izhikov
> wrote:
> 
> Hello, Dylan.
> 
>> At larger scales (e.g., thousands+ of partitions and hundreds+ of consumer 
>> groups) the cardinality of metrics is very high for a broker and very 
>> challenging for a metrics collector to pull out of JMX. 
> 
> Agreed.
> 
> 0. Kafka has the `metrics.jmx.exclude` and `metrics.jmx.include` properties to
> reduce the metric count if required.
> 1. We should improve the JMX exporter, or develop a new one if the existing
> one can't expose what is required, shouldn't we?
> 
>> On Feb 16, 2022, at 18:47, Meissner, Dylan
>> wrote:
>> 
>> It would be very convenient for consumer applications that are not 
>> collecting and shipping their own metrics to have Kafka cluster doing this 
>> for them.
>> 
>> At larger scales (e.g., thousands+ of partitions and hundreds+ of consumer 
>> groups) the cardinality of metrics is very high for a broker and very 
>> challenging for a metrics collector to pull out of JMX. Consumer groups 
>> specifically often see randomly generated ids which, depending on value of 
>> broker's offsets.retention config, can be represented for days and weeks.
>> 
>> KIP-714 is significant for reporting lag at larger scales and can skip 
>> broker's JMX entirely. Client is already collecting 
>> consumer-fetch-manager-metrics metrics, can report them to cluster, the 
>> broker can feed metrics to subscriptions, and this "just works" without new 
>> code in group coordinator.
>> 
>> 
>> From: Николай Ижиков  on behalf of Nikolay Izhikov 
>> 
>> Sent: Wednesday, February 16, 2022 7:11 AM
>> To: dev@kafka.apache.org 
>> Subject: Re: [DISCUSSION] New broker metric. Per partition consumer offset
>> 
>> Chris, thanks for the support.
>> 
>> Dear Kafka committers, could you please advise me:
>> 
>> Do you support my proposal?
>> Can I implement the new metrics in the scope of a separate KIP?
>> 
>> KIP-714 seems to me a much more complex improvement.
>> Moreover, it has a similar but slightly different goal.
>> 
>> All I propose is to expose existing offset data as metrics on the broker side.
>> 
>>> On Feb 16, 2022, at 17:52, Chris Egerton
>>> wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Yep, makes sense to me 
>>> 
>>> Sounds like the motivation here is similar to KIP-714 [1], which allows
>>> clients to publish their own metrics directly to a broker. It seems like
>>> one reason this use case isn't already addressed in that KIP is that, if
>>> all you're doing is taking the delta between a consumer group's
>>> latest-committed offsets and the latest stable offsets (LSO) for a set of
>>> topic partitions, none of that requires the consumer to directly publish
>>> metrics to the broker instead of implicitly updating that metric by
>>> committing offsets. In short, as you've noted--that data is already
>>> available on the broker.
>>> 
>>> I think you make a reasonable case and, coupled with the precedent set by
>>> KIP-714 (which, though not yet accepted, seems to have significant traction
>>> at the moment), it'd be nice to see these metrics available broker-side.
>>> 
>>> I do wonder if there's a question about where the line should be drawn for
>>> other client metrics, but will leave that to people more familiar with
>>> broker logic to think through.
>>> 
>>> [1] -
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Motivation
>>> 
>>> Cheers,
>>> 
>>> Chris
>>> 
>>> On Wed, Feb 16, 2022 at 9:23 AM Nikolay Izhikov  wrote:
>>> 
>>>> Hello, Chris.
>>>> 
>>>> Thanks for the feedback.
>>>> 
>>>>> Have you seen the consumer-side lag metrics [1]? "records-lag",
>>>> 
>>>> Yes, I’m aware of these metrics.
>>>> 
>>>>> If so, I'd be curious to know what the motivation for duplicating
>>>> existing client metrics

Re: [DISCUSSION] New broker metric. Per partition consumer offset

2022-02-16 Thread Nikolay Izhikov
Hello, Dylan.

> At larger scales (e.g., thousands+ of partitions and hundreds+ of consumer 
> groups) the cardinality of metrics is very high for a broker and very 
> challenging for a metrics collector to pull out of JMX. 

Agreed.

0. Kafka has the `metrics.jmx.exclude` and `metrics.jmx.include` properties to
reduce the metric count if required.
1. We should improve the JMX exporter, or develop a new one if the existing one
can't expose what is required, shouldn't we?
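On the cardinality point, filtering can also happen at the metrics-collector level rather than on the broker. As an illustration only, here is a sketch of a Prometheus JMX exporter configuration that drops the high-cardinality per-client mbeans while exporting everything else; the option names come from the exporter's documentation, and the mbean pattern is an assumption about which metrics you would want to exclude:

```yaml
# Illustrative jmx_exporter config: bound metric cardinality by excluding
# the per-client consumer-fetch-manager mbeans. Applied on whichever JVM
# exposes the mbeans the collector scrapes.
lowercaseOutputName: true
blacklistObjectNames:
  - "kafka.consumer:type=consumer-fetch-manager-metrics,*"
rules:
  - pattern: ".*"   # export everything that is not excluded above
```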

> On Feb 16, 2022, at 18:47, Meissner, Dylan
> wrote:
> 
> It would be very convenient for consumer applications that are not collecting 
> and shipping their own metrics to have Kafka cluster doing this for them.
> 
> At larger scales (e.g., thousands+ of partitions and hundreds+ of consumer 
> groups) the cardinality of metrics is very high for a broker and very 
> challenging for a metrics collector to pull out of JMX. Consumer groups 
> specifically often see randomly generated ids which, depending on value of 
> broker's offsets.retention config, can be represented for days and weeks.
> 
> KIP-714 is significant for reporting lag at larger scales and can skip 
> broker's JMX entirely. Client is already collecting 
> consumer-fetch-manager-metrics metrics, can report them to cluster, the 
> broker can feed metrics to subscriptions, and this "just works" without new 
> code in group coordinator.
> 
> ________
> From: Николай Ижиков  on behalf of Nikolay Izhikov 
> 
> Sent: Wednesday, February 16, 2022 7:11 AM
> To: dev@kafka.apache.org 
> Subject: Re: [DISCUSSION] New broker metric. Per partition consumer offset
> 
> Chris, thanks for the support.
> 
> Dear Kafka committers, could you please advise me:
> 
> Do you support my proposal?
> Can I implement the new metrics in the scope of a separate KIP?
> 
> KIP-714 seems to me a much more complex improvement.
> Moreover, it has a similar but slightly different goal.
> 
> All I propose is to expose existing offset data as metrics on the broker side.
> 
>> On Feb 16, 2022, at 17:52, Chris Egerton
>> wrote:
>> 
>> Hi Nikolay,
>> 
>> Yep, makes sense to me 
>> 
>> Sounds like the motivation here is similar to KIP-714 [1], which allows
>> clients to publish their own metrics directly to a broker. It seems like
>> one reason this use case isn't already addressed in that KIP is that, if
>> all you're doing is taking the delta between a consumer group's
>> latest-committed offsets and the latest stable offsets (LSO) for a set of
>> topic partitions, none of that requires the consumer to directly publish
>> metrics to the broker instead of implicitly updating that metric by
>> committing offsets. In short, as you've noted--that data is already
>> available on the broker.
>> 
>> I think you make a reasonable case and, coupled with the precedent set by
>> KIP-714 (which, though not yet accepted, seems to have significant traction
>> at the moment), it'd be nice to see these metrics available broker-side.
>> 
>> I do wonder if there's a question about where the line should be drawn for
>> other client metrics, but will leave that to people more familiar with
>> broker logic to think through.
>> 
>> [1] -
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Motivation
>> 
>> Cheers,
>> 
>> Chris
>> 
>> On Wed, Feb 16, 2022 at 9:23 AM Nikolay Izhikov  wrote:
>> 
>>> Hello, Chris.
>>> 
>>> Thanks for the feedback.
>>> 
>>>> Have you seen the consumer-side lag metrics [1]? "records-lag",
>>> 
>>> Yes, I’m aware of these metrics.
>>> 
>>>> If so, I'd be curious to know what the motivation for duplicating
>>> existing client metrics onto brokers would be?
>>> 
>>> It can be a complex task to set up and access monitoring data for all
>>> consumers.
>>> Clients can be new, experimental, and not integrated into a company
>>> monitoring solution.
>>> Instances can come and go;
>>> clients can change addresses, etc., based on circumstances not related
>>> to Kafka.
>>> 
>>> I think it will be useful to have per-partition consumer offset metrics on
>>> the broker side.
>>> It allows the Kafka administrator to collect monitoring data in one place.
>>> 
>>> Moreover, this data is already available on the broker.
>>> All we need is to expose it.
>>> 
>>> Does that make sense to you?
>>> 
>>>> On Feb 16, 2022, at 17:01, Chris Egerton
>>> wrote:

Re: [DISCUSSION] New broker metric. Per partition consumer offset

2022-02-16 Thread Nikolay Izhikov
Chris, thanks for the support.

Dear Kafka committers, could you please advise me:

Do you support my proposal?
Can I implement the new metrics in the scope of a separate KIP?

KIP-714 seems to me a much more complex improvement.
Moreover, it has a similar but slightly different goal.

All I propose is to expose existing offset data as metrics on the broker side.

> On Feb 16, 2022, at 17:52, Chris Egerton
> wrote:
> 
> Hi Nikolay,
> 
> Yep, makes sense to me 
> 
> Sounds like the motivation here is similar to KIP-714 [1], which allows
> clients to publish their own metrics directly to a broker. It seems like
> one reason this use case isn't already addressed in that KIP is that, if
> all you're doing is taking the delta between a consumer group's
> latest-committed offsets and the latest stable offsets (LSO) for a set of
> topic partitions, none of that requires the consumer to directly publish
> metrics to the broker instead of implicitly updating that metric by
> committing offsets. In short, as you've noted--that data is already
> available on the broker.
> 
> I think you make a reasonable case and, coupled with the precedent set by
> KIP-714 (which, though not yet accepted, seems to have significant traction
> at the moment), it'd be nice to see these metrics available broker-side.
> 
> I do wonder if there's a question about where the line should be drawn for
> other client metrics, but will leave that to people more familiar with
> broker logic to think through.
> 
> [1] -
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability#KIP714:Clientmetricsandobservability-Motivation
> 
> Cheers,
> 
> Chris
> 
> On Wed, Feb 16, 2022 at 9:23 AM Nikolay Izhikov  wrote:
> 
>> Hello, Chris.
>> 
>> Thanks for the feedback.
>> 
>>> Have you seen the consumer-side lag metrics [1]? "records-lag",
>> 
>> Yes, I’m aware of these metrics.
>> 
>>> If so, I'd be curious to know what the motivation for duplicating
>> existing client metrics onto brokers would be?
>> 
>> It can be a complex task to set up and access monitoring data for all
>> consumers.
>> Clients can be new, experimental, and not integrated into a company
>> monitoring solution.
>> Instances can come and go;
>> clients can change addresses, etc., based on circumstances not related
>> to Kafka.
>> 
>> I think it will be useful to have per-partition consumer offset metrics on
>> the broker side.
>> It allows the Kafka administrator to collect monitoring data in one place.
>> 
>> Moreover, this data is already available on the broker.
>> All we need is to expose it.
>> 
>> Does that make sense to you?
>> 
>>> On Feb 16, 2022, at 17:01, Chris Egerton
>> wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Have you seen the consumer-side lag metrics [1]? "records-lag",
>>> "records-lag-avg", "records-lag-max" all give lag stats on a
>>> per-topic-partition basis.
>>> 
>>> If so, I'd be curious to know what the motivation for duplicating
>> existing
>>> client metrics onto brokers would be?
>>> 
>>> [1]
>> https://kafka.apache.org/31/documentation.html#consumer_fetch_monitoring
>>> 
>>> Cheers,
>>> 
>>> Chris
>>> 
>>> On Wed, Feb 16, 2022 at 4:38 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello, Kafka team.
>>>> 
>>>> When running in production, a common user question is "How big is the lag
>>>> between producer and consumer?".
>>>> We have the `kafka-consumer-groups.sh` tool and
>>>> `AdminClient#getListConsumerGroupOffsetsCall` to answer the question.
>>>> 
>>>> Detailed guides even exist on how to calculate *consumer lag* with
>>>> built-in Kafka tools. [1]
>>>> 
>>>> Obviously, the approach with the tool or AdminClient requires additional
>>>> coding and setup, which can be inconvenient.
>>>> 
>>>> I think Kafka should provide a per-partition consumer offset metric.
>>>> It would simplify running and monitoring Kafka deployments in production.
>>>> 
>>>> I looked into `GroupMetadataManager.scala` and think it is possible to
>>>> add those metrics.
>>>> 
>>>> What do you think?
>>>> Do we need these metrics on the Kafka broker?
>>>> 
>>>> [1] https://www.baeldung.com/java-kafka-consumer-lag
>> 
>> 



Re: [DISCUSSION] New broker metric. Per partition consumer offset

2022-02-16 Thread Nikolay Izhikov
Hello, Chris.

Thanks for the feedback.

> Have you seen the consumer-side lag metrics [1]? "records-lag",

Yes, I’m aware of these metrics.

> If so, I'd be curious to know what the motivation for duplicating existing 
> client metrics onto brokers would be?

It can be a complex task to set up and access monitoring data for all consumers.
Clients can be new, experimental, and not integrated into a company monitoring
solution.
Instances can come and go;
clients can change addresses, etc., based on circumstances not related to
Kafka.

I think it will be useful to have per-partition consumer offset metrics on the
broker side.
It allows the Kafka administrator to collect monitoring data in one place.

Moreover, this data is already available on the broker.
All we need is to expose it.

Does that make sense to you?

> On Feb 16, 2022, at 17:01, Chris Egerton
> wrote:
> 
> Hi Nikolay,
> 
> Have you seen the consumer-side lag metrics [1]? "records-lag",
> "records-lag-avg", "records-lag-max" all give lag stats on a
> per-topic-partition basis.
> 
> If so, I'd be curious to know what the motivation for duplicating existing
> client metrics onto brokers would be?
> 
> [1] https://kafka.apache.org/31/documentation.html#consumer_fetch_monitoring
> 
> Cheers,
> 
> Chris
> 
> On Wed, Feb 16, 2022 at 4:38 AM Nikolay Izhikov  wrote:
> 
>> Hello, Kafka team.
>> 
>> When running in production, a common user question is "How big is the lag
>> between producer and consumer?".
>> We have the `kafka-consumer-groups.sh` tool and
>> `AdminClient#getListConsumerGroupOffsetsCall` to answer the question.
>> 
>> Detailed guides even exist on how to calculate *consumer lag* with built-in
>> Kafka tools. [1]
>> 
>> Obviously, the approach with the tool or AdminClient requires additional
>> coding and setup, which can be inconvenient.
>> 
>> I think Kafka should provide a per-partition consumer offset metric.
>> It would simplify running and monitoring Kafka deployments in production.
>> 
>> I looked into `GroupMetadataManager.scala` and think it is possible to add
>> those metrics.
>> 
>> What do you think?
>> Do we need these metrics on the Kafka broker?
>> 
>> [1] https://www.baeldung.com/java-kafka-consumer-lag



[DISCUSSION] New broker metric. Per partition consumer offset

2022-02-16 Thread Nikolay Izhikov
Hello, Kafka team.

When running in production, a common user question is "How big is the lag
between producer and consumer?".
We have the `kafka-consumer-groups.sh` tool and
`AdminClient#getListConsumerGroupOffsetsCall` to answer the question.

Detailed guides even exist on how to calculate *consumer lag* with built-in
Kafka tools. [1]

Obviously, the approach with the tool or AdminClient requires additional
coding and setup, which can be inconvenient.

I think Kafka should provide a per-partition consumer offset metric.
It would simplify running and monitoring Kafka deployments in production.

I looked into `GroupMetadataManager.scala` and think it is possible to add
those metrics.

What do you think?
Do we need these metrics on the Kafka broker?

[1] https://www.baeldung.com/java-kafka-consumer-lag
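Since this thread centers on how lag is derived from data the broker already holds, here is a minimal sketch of that arithmetic: lag = log-end-offset minus committed offset, per partition. `ConsumerLag` and `computeLag` are hypothetical names, and plain strings stand in for `TopicPartition` keys so the snippet runs without the Kafka client library; in real code the two maps would be filled from `Admin#listConsumerGroupOffsets` and `Admin#listOffsets`:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating the lag arithmetic that
// kafka-consumer-groups.sh performs for each partition of a group.
public class ConsumerLag {

    public static Map<String, Long> computeLag(Map<String, Long> committed,
                                               Map<String, Long> logEnd) {
        Map<String, Long> lag = new HashMap<>();
        for (Map.Entry<String, Long> e : logEnd.entrySet()) {
            // A group with no committed offset for a partition is treated
            // as fully lagging from offset 0.
            long committedOffset = committed.getOrDefault(e.getKey(), 0L);
            lag.put(e.getKey(), e.getValue() - committedOffset);
        }
        return lag;
    }
}
```

Exposing this same subtraction as a broker-side gauge is essentially what the proposal asks for.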

Re: [DISCUSS] Should we automatically close stale PRs?

2022-02-09 Thread Nikolay Izhikov
> Thanks for that list, Nikolay,

Thanks, John.

I made a second round of digging through abandoned PRs.
Here is the next batch that should be closed:

https://github.com/apache/kafka/pull/1291
https://github.com/apache/kafka/pull/1323
https://github.com/apache/kafka/pull/1412
https://github.com/apache/kafka/pull/1757
https://github.com/apache/kafka/pull/1741
https://github.com/apache/kafka/pull/1715
https://github.com/apache/kafka/pull/1668
https://github.com/apache/kafka/pull/1666
https://github.com/apache/kafka/pull/1661
https://github.com/apache/kafka/pull/1626
https://github.com/apache/kafka/pull/1624
https://github.com/apache/kafka/pull/1608
https://github.com/apache/kafka/pull/1606
https://github.com/apache/kafka/pull/1582
https://github.com/apache/kafka/pull/1522
https://github.com/apache/kafka/pull/1516
https://github.com/apache/kafka/pull/1493
https://github.com/apache/kafka/pull/1473
https://github.com/apache/kafka/pull/1870
https://github.com/apache/kafka/pull/1883
https://github.com/apache/kafka/pull/1893
https://github.com/apache/kafka/pull/1894
https://github.com/apache/kafka/pull/1912
https://github.com/apache/kafka/pull/1933
https://github.com/apache/kafka/pull/1983
https://github.com/apache/kafka/pull/1984
https://github.com/apache/kafka/pull/2017
https://github.com/apache/kafka/pull/2018

> On Feb 9, 2022, at 22:37, John Roesler wrote:
> 
> Thanks for that list, Nikolay,
> 
> I've just closed them all.
> 
> And thanks to you all for working to keep Kafka development
> healthy!
> 
> -John
> 
> On Wed, 2022-02-09 at 14:19 +0300, Nikolay Izhikov wrote:
>> Hello, guys.
>> 
>> I made a quick search through the oldest PRs.
>> Looks like the following list can be safely closed.
>> 
>> Committers, could you please push the actual "close" button for this list
>> of PRs?
>> 
>> https://github.com/apache/kafka/pull/560
>> https://github.com/apache/kafka/pull/200
>> https://github.com/apache/kafka/pull/62
>> https://github.com/apache/kafka/pull/719
>> https://github.com/apache/kafka/pull/735
>> https://github.com/apache/kafka/pull/757
>> https://github.com/apache/kafka/pull/824
>> https://github.com/apache/kafka/pull/880
>> https://github.com/apache/kafka/pull/907
>> https://github.com/apache/kafka/pull/983
>> https://github.com/apache/kafka/pull/1035
>> https://github.com/apache/kafka/pull/1078
>> https://github.com/apache/kafka/pull/
>> https://github.com/apache/kafka/pull/1135
>> https://github.com/apache/kafka/pull/1147
>> https://github.com/apache/kafka/pull/1150
>> https://github.com/apache/kafka/pull/1244
>> https://github.com/apache/kafka/pull/1269
>> https://github.com/apache/kafka/pull/1415
>> https://github.com/apache/kafka/pull/1468
>> 
>>> On Feb 7, 2022, at 20:04, Mickael Maison
>>> wrote:
>>> 
>>> Hi David,
>>> 
>>> I agree with you, I think we should close stale PRs.
>>> 
>>> Overall, I think we should also see if there are other Github actions
>>> that may ease the work for reviewers and/or give more visibility of
>>> the process to PR authors.
>>> I'm thinking things like:
>>> - code coverage changes
>>> - better view on results from the build, for example if it's failing
>>> checkstyle, the author could be notified first
>>> - check whether public API are touched and it requires a KIP
>>> 
>>> For example, see some actions/integration used by other Apache projects:
>>> - Flink: https://github.com/apache/flink/pull/18638#issuecomment-1030709579
>>> - Beam: https://github.com/apache/beam/pull/16746#issue-1124656975
>>> - Pinot: https://github.com/apache/pinot/pull/8139#issuecomment-1030701265
>>> 
>>> Finally, as several people have mentioned already, what can we do to
>>> increase the impact of contributors that are not (yet?) committers?
>>> Currently, our long delays in reviewing PRs and KIPs is hurting the
>>> project and we're for sure missing out some fixes and potential
>>> contributors. I think Josep's idea is interesting and finding ways to
>>> engage more people and share some responsibilities better will improve
>>> the project. Currently the investment to become a committer is pretty
>>> high. This could provide a stepping stone (or an intermediary role)
>>> for some people in the community.
>>> 
>>> Thanks,
>>> Mickael
>>> 
>>> 
>>> On Mon, Feb 7, 2022 at 12:51 PM Josep Prat  
>>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> It seems like a great idea. I agree with you that we should use this

Re: [DISCUSS] Should we automatically close stale PRs?

2022-02-09 Thread Nikolay Izhikov
Hello, guys.

I made a quick search through the oldest PRs.
Looks like the following list can be safely closed.

Committers, could you please push the actual "close" button for this list of
PRs?

https://github.com/apache/kafka/pull/560
https://github.com/apache/kafka/pull/200
https://github.com/apache/kafka/pull/62
https://github.com/apache/kafka/pull/719
https://github.com/apache/kafka/pull/735
https://github.com/apache/kafka/pull/757
https://github.com/apache/kafka/pull/824
https://github.com/apache/kafka/pull/880
https://github.com/apache/kafka/pull/907
https://github.com/apache/kafka/pull/983
https://github.com/apache/kafka/pull/1035
https://github.com/apache/kafka/pull/1078
https://github.com/apache/kafka/pull/
https://github.com/apache/kafka/pull/1135
https://github.com/apache/kafka/pull/1147
https://github.com/apache/kafka/pull/1150
https://github.com/apache/kafka/pull/1244
https://github.com/apache/kafka/pull/1269
https://github.com/apache/kafka/pull/1415
https://github.com/apache/kafka/pull/1468

> On Feb 7, 2022, at 20:04, Mickael Maison
> wrote:
> 
> Hi David,
> 
> I agree with you, I think we should close stale PRs.
> 
> Overall, I think we should also see if there are other Github actions
> that may ease the work for reviewers and/or give more visibility of
> the process to PR authors.
> I'm thinking things like:
> - code coverage changes
> - better view on results from the build, for example if it's failing
> checkstyle, the author could be notified first
> - check whether public API are touched and it requires a KIP
> 
> For example, see some actions/integration used by other Apache projects:
> - Flink: https://github.com/apache/flink/pull/18638#issuecomment-1030709579
> - Beam: https://github.com/apache/beam/pull/16746#issue-1124656975
> - Pinot: https://github.com/apache/pinot/pull/8139#issuecomment-1030701265
> 
> Finally, as several people have mentioned already, what can we do to
> increase the impact of contributors that are not (yet?) committers?
> Currently, our long delays in reviewing PRs and KIPs is hurting the
> project and we're for sure missing out some fixes and potential
> contributors. I think Josep's idea is interesting and finding ways to
> engage more people and share some responsibilities better will improve
> the project. Currently the investment to become a committer is pretty
> high. This could provide a stepping stone (or an intermediary role)
> for some people in the community.
> 
> Thanks,
> Mickael
> 
> 
> On Mon, Feb 7, 2022 at 12:51 PM Josep Prat  
> wrote:
>> 
>> Hi,
>> 
>> It seems like a great idea. I agree with you that we should use this as a
>> means to notify contributors and reviewers that there is some work to be
>> done.
>> 
>> Regarding labels, a couple of things, first one is that PR participants
>> won't get notified when a label is applied. So probably it would be best to
>> apply a label and add a comment.
>> Secondly, GitHub offers better fine-grained roles for contributors: read,
>> triage, write, maintain, admin (further reading here:
>> https://docs.github.com/en/organizations/managing-access-to-your-organizations-repositories/repository-roles-for-an-organization#permissions-for-each-role).
>> One thing that might make sense to do maybe is to add frequent contributors
>> with the "triage" role, so they could label PRs they reviewed and they can
>> be taken by committers for a further review and potential merge. What do
>> you think?
>> 
>> Best,
>> 
>> On Mon, Feb 7, 2022 at 12:16 PM Nikolay Izhikov  wrote:
>> 
>>>> We do not have a separate list of PRs that need pre-reviews.
>>> 
>>> Ok.
>>> What should I do when I find a PR that needs to be closed?
>>> Whom can I tag to do the actual close?
>>> 
>>>> On Feb 7, 2022, at 13:53, Bruno Cadonna wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> Thank you David for bringing this up!
>>>> 
>>>> I am in favour of automatically closing stale PRs. I agree with Guozhang
>>> that notifications of staleness to authors would be better than silently
>>> closing them. I assume the notification happens automatically when the
>>> label "Stale" is added to the PR.
>>>> 
>>>> +1 for Matthias' proposal of non-committers doing a pre-review. That
>>> would definitely save some time for committer reviews.
>>>> 
>>>> Nikolay, great that you are willing to do reviews. We do not have a
>>> separate list of PRs that need pre-reviews. You can consult the list of PRs
>>> of Apache Kafka (https://github.com/apache/kafka/pulls) and choose f

Re: [DISCUSS] Should we automatically close stale PRs?

2022-02-07 Thread Nikolay Izhikov
> We do not have a separate list of PRs that need pre-reviews.

Ok. 
What should I do when I find a PR that needs to be closed?
Whom can I tag to do the actual close?

> On Feb 7, 2022, at 13:53, Bruno Cadonna wrote:
> 
> Hi,
> 
> Thank you David for bringing this up!
> 
> I am in favour of automatically closing stale PRs. I agree with Guozhang that 
> notifications of staleness to authors would be better than silently closing 
> them. I assume the notification happens automatically when the label "Stale" 
> is added to the PR.
> 
> +1 for Matthias' proposal of non-committers doing a pre-review. That would 
> definitely save some time for committer reviews.
> 
> Nikolay, great that you are willing to do reviews. We do not have a separate 
> list of PRs that need pre-reviews. You can consult the list of PRs of Apache 
> Kafka (https://github.com/apache/kafka/pulls) and choose from there. I think 
> that is the simplest way to start reviewing. Maybe Luke has some tips here 
> since he does an excellent job in reviewing as a non-committer.
> 
> Best,
> Bruno
> 
> On 07.02.22 08:24, Nikolay Izhikov wrote:
>> Hello, Matthias, Luke.
>>> I agree with Matthias that contributors could and should help do more 
>>> "pre-review" PRs.
>> I am personally ready to do initial reviews of PRs. Do we have some recipe
>> to filter PRs that have the potential to land in trunk?
>> Could you please send me a list of PRs that need to be pre-reviewed?
>>> I might be useful thought to just do a better job to update KIP status more 
>>> frequently
>> First, I thought it was the author's job to keep the KIP status up to date.
>> But it can be tricky to determine the actual KIP status because of the lack
>> of feedback from committers :)
>> Second, the other issue is determining which KIPs are just waiting for
>> someone to implement them, and which are simply wrong ideas or similar.
>> All KIPs of this kind have the status "Under discussion".
>> Actually, if someone has a list of potentially useful KIPs, please send it.
>> I am ready to work on one of those.
>>> On Feb 7, 2022, at 05:28, Luke Chen wrote:
>>> 
>>> I agree with Matthias that contributors could and should help do more
>>> "pre-review" PRs.
>>> Otherwise, we're not fixing the root cause of the issue, and still keeping
>>> piling up the PRs (and auto closing them after stale)
>>> 
>>> And I also agree with Guozhang that we should try to notify at least the
>>> committers about the closed PRs (maybe PR participants + committers if
>>> possible).
>>> Although the PRs are stale, there might be some good PRs just got ignored.
>>> 
>>> Thank you.
>>> Luke
>>> 
>>> 
>>> On Mon, Feb 7, 2022 at 6:50 AM Guozhang Wang  wrote:
>>> 
>>>> Thanks for bringing this up David. I'm in favor of some automatic ways to
>>>> clean up stale PRs. More specifically:
>>>> 
>>>> * I think there are indeed many root causes why we have so many stale PRs
>>>> that we should consider, and admittedly the reviewing manpower cannot keep
>>>> up with the contributing pace is a big one of them. But in this discussion
>>>> I'd personally like to keep this out of the scope and maybe keep it as a
>>>> separate discussion (I think we are having some discussions on some of
>>>> these root causes in parallel at the moment).
>>>> 
>>>> * As for just how to handle the existing stale PRs, I think having an
>>>> automatic way would be possibly the most effective manner, as I suspect how
>>>> maintainable it would be to do that manually. The question though would be:
>>>> do we just automatically close those PRs silently or should we also send
>>>> notifications along with it. It seems https://github.com/actions/stale can
>>>> definitely do the first, but not sure if it could the second? Plus let's
>>>> say if we want notifications and it's doable via Action, could we configure
>>>> just the committers list (as sending notifications to all community
>>>> subscribers may be too spammy)? Personally I feel setting 6 months for
>>>> closing and notifying committers on a per-week basis seems sufficient.
>>>> 
>>>> 
>>>> Guozhang
>>>> 
>>>> 
>>>> On Sun, Feb 6, 2022 at 9:58 AM Matthias J. Sax  wrote:
>>>> 
>>>>> I am +1 to close stale PRs -- not sure to what extend we want to
>>>>> automate it, or just leave it up
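For reference, a minimal sketch of a workflow built on the actions/stale GitHub Action mentioned earlier in this thread. The thresholds, label, and message below are illustrative assumptions for discussion, not a project decision:

```yaml
# Hypothetical workflow sketch using the actions/stale GitHub Action.
# Thresholds, label, and message are illustrative, not a project decision.
name: Close stale PRs
on:
  schedule:
    - cron: '0 3 * * *'   # run once a day
permissions:
  pull-requests: write
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v4
        with:
          days-before-pr-stale: 180   # ~6 months of inactivity, as suggested in the thread
          days-before-pr-close: 30    # closed a month after being marked stale
          stale-pr-label: 'stale'
          stale-pr-message: >
            This PR has had no activity for 6 months and has been marked stale.
            It will be closed in 30 days unless there is new activity.
```

The stale-pr-message comment doubles as the notification to the author and PR participants, which would address the "notify rather than silently close" concern raised in the thread.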

Re: [DISCUSS] Should we automatically close stale PRs?

2022-02-06 Thread Nikolay Izhikov
>>> I find this much more digestible compared to the main KIP page.
>>> 
>>> Might also be good to have a sub-page for Connect KIPs?
>>> 
>>> 
>>> -Matthias
>>> 
>>> 
>>> On 2/5/22 05:57, Luke Chen wrote:
>>>> Hi Nikolay,
>>>> 
>>>> That's a good question!
>>>> But I think for stale KIP, we should have another discussion thread.
>>>> 
>>>> In my opinion, I agree we should also have similar mechanism for KIP.
>>>> Currently, the state of KIP has "under discussion", "voting", and
>>>> "accepted".
>>>> The KIP might stay in "discussion" or "voting" state forever.
>>>> We might be able to have a new state called "close" for KIP.
>>>> And we can review those inactive KIPs for a long time like PR did, to
>> see
>>>> if these KIPs need to close or re-start the discussion again.
>>>> 
>>>> Thank you.
>>>> Luke
>>>> 
>>>> On Sat, Feb 5, 2022 at 9:23 PM Nikolay Izhikov 
>>> wrote:
>>>> 
>>>>> Hello, David, Luke.
>>>>> 
>>>>> What about KIPs?
>>>>> Should we have some special state on KIPs that was rejected or can’t
>> be
>>>>> implemented due to lack of design or when Kafka goes in another
>>> direction?
>>>>> Right now those kind of KIPs just have no feedback.
>>>>> For me as a contributor it’s not clear - what is wrong with the KIP.
>>>>> 
>>>>> Is it wrong? Is there are no contributor to do the implementation?
>>>>> 
>>>>>> 5 февр. 2022 г., в 15:49, Luke Chen  написал(а):
>>>>>> 
>>>>>> Hi David,
>>>>>> 
>>>>>> I agree with it! This is also a good way to let both parties (code
>>> author
>>>>>> and reviewers) know there's a PR is not active anymore. Should we
>>>>> continue
>>>>>> it or close it directly?
>>>>>> 
>>>>>> In my opinion, 1 year is too long, half a year should be long enough.
>>>>>> 
>>>>>> Thank you.
>>>>>> Luke
>>>>>> 
>>>>>> On Sat, Feb 5, 2022 at 8:17 PM Sagar 
>>> wrote:
>>>>>> 
>>>>>>> Hey David,
>>>>>>> 
>>>>>>> That's a great idea.. Just to stress your point, this keeps both
>>> parties
>>>>>>> informed if a PR has become stale. So, the reviewer would know that
>>>>> there
>>>>>>> was some PR which was being reviewed but due to inactivity it got
>>>>> closed so
>>>>>>> maybe time to relook and similarly the submitter.
>>>>>>> 
>>>>>>> And yeah, any stale/unused PRs can be closed straight away thereby
>>>>> reducing
>>>>>>> the load on reviewers. I have done some work on kubernetes open
>> source
>>>>> and
>>>>>>> they follow a similar paradigm which is useful.
>>>>>>> 
>>>>>>> Thanks!
>>>>>>> Sagar.
>>>>>>> 
>>>>> 
>>>>> 
>>>> 
>>> 
>> 
>> 
>> --
>> -- Guozhang
>> 



Re: [DISCUSS] Should we automatically close stale PRs?

2022-02-05 Thread Nikolay Izhikov
Hello, David, Luke.

What about KIPs?
Should we have some special state for KIPs that were rejected or can’t be 
implemented due to a lack of design, or when Kafka goes in another direction?
Right now those kinds of KIPs just get no feedback.
For me as a contributor it’s not clear what is wrong with the KIP.

Is it wrong? Is there no contributor to do the implementation?

> 5 февр. 2022 г., в 15:49, Luke Chen  написал(а):
> 
> Hi David,
> 
> I agree with it! This is also a good way to let both parties (code author
> and reviewers) know there's a PR is not active anymore. Should we continue
> it or close it directly?
> 
> In my opinion, 1 year is too long, half a year should be long enough.
> 
> Thank you.
> Luke
> 
> On Sat, Feb 5, 2022 at 8:17 PM Sagar  wrote:
> 
>> Hey David,
>> 
>> That's a great idea.. Just to stress your point, this keeps both parties
>> informed if a PR has become stale. So, the reviewer would know that there
>> was some PR which was being reviewed but due to inactivity it got closed so
>> maybe time to relook and similarly the submitter.
>> 
>> And yeah, any stale/unused PRs can be closed straight away thereby reducing
>> the load on reviewers. I have done some work on kubernetes open source and
>> they follow a similar paradigm which is useful.
>> 
>> Thanks!
>> Sagar.
>> 



Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-12-02 Thread Nikolay Izhikov
Dear Kafka committers.

Let’s have this API in Kafka!

> 2 дек. 2021 г., в 17:19, Christopher Shannon 
>  написал(а):
> 
> Revisiting this as this has come up for my use case again. Specifically for
> validation I need to be able to validate headers including compressed
> messages. It looks like in LogValidator the messages are already
> decompressed to validate records but the headers get skipped when loaded
> into a partial record. So as part of this change I would think there should
> be a way to read in the headers for validation even if records are
> compressed.
> 
> On Wed, Jul 7, 2021 at 3:30 AM Nikolay Izhikov  wrote:
> 
>> Hello, James.
>> 
>>> One use case we would like is to require that producers are sending
>> compressed messages.
>> 
>> I think that forcing producers to send compressed messages is out of scope
>> of this KIP.
>> 
>> 
>>> 7 июля 2021 г., в 08:48, Soumyajit Sahu 
>> написал(а):
>>> 
>>> Interesting point. You are correct that at least KIP-729 cannot validate
>>> that.
>>> 
>>> We could propose a different KIP for that which could enforce that in the
>>> upper layer. Personally, I would be hesitant to discard the data in that
>>> case, but just use metrics/logs to detect those and inform the producers
>>> about it.
>>> 
>>> 
>>> On Tue, Jul 6, 2021, 9:13 PM James Cheng  wrote:
>>> 
>>>> One use case we would like is to require that producers are sending
>>>> compressed messages. Would this KIP (or KIP-686) allow the broker to
>> detect
>>>> that? From looking at both KIPs, it doesn't look it would help with my
>>>> particular use case. Both of the KIPs are at the Record-level.
>>>> 
>>>> Thanks,
>>>> -James
>>>> 
>>>>> On Jun 30, 2021, at 10:05 AM, Soumyajit Sahu >> 
>>>> wrote:
>>>>> 
>>>>> Hi Nikolay,
>>>>> Great to hear that. I'm ok with either one too.
>>>>> I had missed noticing the KIP-686. Thanks for bringing it up.
>>>>> 
>>>>> I have tried to keep this one simple, but hope it can cover all our
>>>>> enterprise needs.
>>>>> 
>>>>> Should we put this one for vote?
>>>>> 
>>>>> Regards,
>>>>> Soumyajit
>>>>> 
>>>>> 
>>>>> On Wed, Jun 30, 2021, 8:50 AM Nikolay Izhikov 
>>>> wrote:
>>>>> 
>>>>>> Team, If we have support from committers for API to check records on
>> the
>>>>>> broker side let’s choose one KIP to go with and move forward to vote
>> and
>>>>>> implementation?
>>>>>> I’m ready to drive implementation of this API.
>>>>>> 
>>>>>> I’m ready to drive the implementation of this API.
>>>>>> It seems very useful to me.
>>>>>> 
>>>>>>> 30 июня 2021 г., в 18:04, Nikolay Izhikov 
>>>>>> написал(а):
>>>>>>> 
>>>>>>> Hello.
>>>>>>> 
>>>>>>> I had a very similar proposal [1].
>>>>>>> So, yes, I think we should have one implementation of API in the
>>>> product.
>>>>>>> 
>>>>>>> [1]
>>>>>> 
>>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
>>>>>>> 
>>>>>>>> 30 июня 2021 г., в 17:57, Christopher Shannon <
>>>>>> christopher.l.shan...@gmail.com> написал(а):
>>>>>>>> 
>>>>>>>> I would find this feature very useful as well as adding custom
>>>>>> validation
>>>>>>>> to incoming records would be nice to prevent bad data from making it
>>>> to
>>>>>> the
>>>>>>>> topic.
>>>>>>>> 
>>>>>>>> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu <
>>>> soumyajit.s...@gmail.com
>>>>>>> 
>>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Thanks Colin! Good call on the ApiRecordError. We could use
>>>>>>>>> InvalidRecordException instead, and have the broker convert it
>>>>>>>>> to ApiRecordError.
>>>>>>>>> Modified signatu

Re: KIP Process Needs Improvement

2021-10-19 Thread Nikolay Izhikov
Hello, Knowles.

I have some frustration with the KIP process, also.
Some features that, at a glance, are requested by the community just can’t be done 
without explicit committer approval.

KIP-729 [1] and KIP-686 [2], which request the same feature, can be taken as a good 
example.

Dear Kafka committers, what kind of help do you need with abandoned KIPs?
How can I help to reduce the number of lost proposals?

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
[2] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker




> 19 окт. 2021 г., в 17:42, John Roesler  написал(а):
> 
> Good morning Knowles,
> 
> Thank you for sharing this feedback. I think your frustration is well 
> founded. I think most/all of the active committers carry around a sense of 
> guilt about KIPs that we haven’t been able to review.
> 
> We do have a responsibility to hold auditable, public design discussions 
> before adopting any new features. As a volunteer organization, we are also at 
> the mercy of committers’ capacity outside of work and personal life.
> 
> The main thing we do to try and stay responsive is to continue to add new 
> folks to the committer roster in the hopes that with more committers, we have 
> a better chance that someone will be able to volunteer to review each new KIP.
> 
> Unfortunately, aside from that, we have only this loose system where 
> committers try to do the best we can, and new contributors try to keep 
> pinging their discussion threads until someone responds. 
> 
> Your email itself serves as a good wake-up call, and I’m sure that people 
> will take a look at that list of hanging KIPs now. Hopefully, it will also 
> provide a spark that leads someone to propose a process improvement. I’ll 
> certainly be thinking about it myself.
> 
> Thanks again,
> John 
> 
> 
> On Tue, Oct 19, 2021, at 09:07, Knowles Atchison Jr wrote:
>> Good morning,
>> 
>> The current process of KIPs needs to be improved. There are at least a
>> handful of open KIPs with existing PRs that are in a purgatory state. I
>> understand that people are busy, but if you are going to gatekeep Kafka
>> with this process, then it must be responsive. Even if the community
>> decides they do not want the change, the KIP should be addressed and closed
>> out.
>> 
>> The entire wiki page is a graveyard of unresponded KIPs. For some changes,
>> it takes a nontrivial amount of effort to put together the wiki page and
>> one has to essentially write the code implementation hoping that it will be
>> pulled into the codebase. This is very frustrating as an external developer
>> to have put in the work and then effectively be ignored.
>> 
>> We have to maintain a custom build because KIPs are not debated, voted on,
>> or merged in a timely manner.
>> 
>> Knowles



[jira] [Resolved] (KAFKA-13302) [IEP-59] Support not default page size

2021-09-22 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov resolved KAFKA-13302.
-
Resolution: Invalid

Sorry for any inconvenience. This issue should go to the Ignite project.

> [IEP-59] Support not default page size
> --
>
> Key: KAFKA-13302
> URL: https://issues.apache.org/jira/browse/KAFKA-13302
> Project: Kafka
>  Issue Type: Improvement
>    Reporter: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-59
>
> Currently, CDC doesn't support a non-default page size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13302) [IEP-59] Support not default page size

2021-09-15 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-13302:
---

 Summary: [IEP-59] Support not default page size
 Key: KAFKA-13302
 URL: https://issues.apache.org/jira/browse/KAFKA-13302
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov


Currently, CDC doesn't support a non-default page size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-07-07 Thread Nikolay Izhikov
Hello, James.

> One use case we would like is to require that producers are sending 
> compressed messages.

I think that forcing producers to send compressed messages is out of scope of 
this KIP.


> 7 июля 2021 г., в 08:48, Soumyajit Sahu  написал(а):
> 
> Interesting point. You are correct that at least KIP-729 cannot validate
> that.
> 
> We could propose a different KIP for that which could enforce that in the
> upper layer. Personally, I would be hesitant to discard the data in that
> case, but just use metrics/logs to detect those and inform the producers
> about it.
> 
> 
> On Tue, Jul 6, 2021, 9:13 PM James Cheng  wrote:
> 
>> One use case we would like is to require that producers are sending
>> compressed messages. Would this KIP (or KIP-686) allow the broker to detect
>> that? From looking at both KIPs, it doesn't look it would help with my
>> particular use case. Both of the KIPs are at the Record-level.
>> 
>> Thanks,
>> -James
>> 
>>> On Jun 30, 2021, at 10:05 AM, Soumyajit Sahu 
>> wrote:
>>> 
>>> Hi Nikolay,
>>> Great to hear that. I'm ok with either one too.
>>> I had missed noticing the KIP-686. Thanks for bringing it up.
>>> 
>>> I have tried to keep this one simple, but hope it can cover all our
>>> enterprise needs.
>>> 
>>> Should we put this one for vote?
>>> 
>>> Regards,
>>> Soumyajit
>>> 
>>> 
>>> On Wed, Jun 30, 2021, 8:50 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Team, If we have support from committers for API to check records on the
>>>> broker side let’s choose one KIP to go with and move forward to vote and
>>>> implementation?
>>>> I’m ready to drive implementation of this API.
>>>> 
>>>> I’m ready to drive the implementation of this API.
>>>> It seems very useful to me.
>>>> 
>>>>> 30 июня 2021 г., в 18:04, Nikolay Izhikov 
>>>> написал(а):
>>>>> 
>>>>> Hello.
>>>>> 
>>>>> I had a very similar proposal [1].
>>>>> So, yes, I think we should have one implementation of API in the
>> product.
>>>>> 
>>>>> [1]
>>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
>>>>> 
>>>>>> 30 июня 2021 г., в 17:57, Christopher Shannon <
>>>> christopher.l.shan...@gmail.com> написал(а):
>>>>>> 
>>>>>> I would find this feature very useful as well as adding custom
>>>> validation
>>>>>> to incoming records would be nice to prevent bad data from making it
>> to
>>>> the
>>>>>> topic.
>>>>>> 
>>>>>> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu <
>> soumyajit.s...@gmail.com
>>>>> 
>>>>>> wrote:
>>>>>> 
>>>>>>> Thanks Colin! Good call on the ApiRecordError. We could use
>>>>>>> InvalidRecordException instead, and have the broker convert it
>>>>>>> to ApiRecordError.
>>>>>>> Modified signature below.
>>>>>>> 
>>>>>>> interface BrokerRecordValidator {
>>>>>>> /**
>>>>>>> * Validate the record for a given topic-partition.
>>>>>>> */
>>>>>>> Optional validateRecord(TopicPartition
>>>>>>> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
>>>>>>> }
>>>>>>> 
>>>>>>> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe 
>>>> wrote:
>>>>>>> 
>>>>>>>> Hi Soumyajit,
>>>>>>>> 
>>>>>>>> The difficult thing is deciding which fields to share and how to
>> share
>>>>>>>> them.  Key and value are probably the minimum we need to make this
>>>>>>> useful.
>>>>>>>> If we do choose to go with byte buffer, it is not necessary to also
>>>> pass
>>>>>>>> the size, since ByteBuffer maintains that internally.
>>>>>>>> 
>>>>>>>> ApiRecordError is also an internal class, so it can't be used in a
>>>> public
>>>>>>>> API.  I think most likely if we were going to do this, we would just
>>>>>>> catc

Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Nikolay Izhikov
Team, if we have support from committers for an API to check records on the broker 
side, let’s choose one KIP to go with and move forward to vote and 
implementation.

I’m ready to drive the implementation of this API.
It seems very useful to me. 
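To make the discussion concrete, here is a self-contained sketch loosely based on the BrokerRecordValidator interface drafted in KIP-729. The simplified signature (plain `String`/`int` instead of Kafka's TopicPartition and Header types) and the NonEmptyValueValidator policy are assumptions for the example, not a shipped Kafka API:

```java
import java.nio.ByteBuffer;
import java.util.Optional;

// Simplified stand-in for the validator interface drafted in KIP-729. The real
// draft uses Kafka's TopicPartition and Header types; they are replaced with
// plain types here so the sketch is self-contained. Illustration only.
interface BrokerRecordValidator {
    // An empty result means the record is valid; a present value carries the
    // validation error that the broker would turn into an error for the producer.
    Optional<String> validateRecord(String topic, int partition,
                                    ByteBuffer key, ByteBuffer value);
}

// Example policy: reject records that have a missing or empty value.
class NonEmptyValueValidator implements BrokerRecordValidator {
    @Override
    public Optional<String> validateRecord(String topic, int partition,
                                           ByteBuffer key, ByteBuffer value) {
        if (value == null || !value.hasRemaining()) {
            return Optional.of("Empty value rejected for " + topic + "-" + partition);
        }
        return Optional.empty();
    }
}

public class ValidatorDemo {
    public static void main(String[] args) {
        BrokerRecordValidator v = new NonEmptyValueValidator();
        // An empty payload fails validation...
        System.out.println(v.validateRecord("orders", 0, null, ByteBuffer.allocate(0)));
        // ...while a non-empty payload passes.
        System.out.println(v.validateRecord("orders", 0, null, ByteBuffer.wrap(new byte[]{1})));
    }
}
```

The broker would invoke the configured validator from its log-append path and reject the whole produce request when a record fails, so the policy runs once, server-side, regardless of which client produced the data.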

> 30 июня 2021 г., в 18:04, Nikolay Izhikov  написал(а):
> 
> Hello.
> 
> I had a very similar proposal [1].
> So, yes, I think we should have one implementation of API in the product.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
> 
>> 30 июня 2021 г., в 17:57, Christopher Shannon 
>>  написал(а):
>> 
>> I would find this feature very useful as well as adding custom validation
>> to incoming records would be nice to prevent bad data from making it to the
>> topic.
>> 
>> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu 
>> wrote:
>> 
>>> Thanks Colin! Good call on the ApiRecordError. We could use
>>> InvalidRecordException instead, and have the broker convert it
>>> to ApiRecordError.
>>> Modified signature below.
>>> 
>>> interface BrokerRecordValidator {
>>>  /**
>>>   * Validate the record for a given topic-partition.
>>>   */
>>>   Optional validateRecord(TopicPartition
>>> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
>>> }
>>> 
>>> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe  wrote:
>>> 
>>>> Hi Soumyajit,
>>>> 
>>>> The difficult thing is deciding which fields to share and how to share
>>>> them.  Key and value are probably the minimum we need to make this
>>> useful.
>>>> If we do choose to go with byte buffer, it is not necessary to also pass
>>>> the size, since ByteBuffer maintains that internally.
>>>> 
>>>> ApiRecordError is also an internal class, so it can't be used in a public
>>>> API.  I think most likely if we were going to do this, we would just
>>> catch
>>>> an exception and use the exception text as the validation error.
>>>> 
>>>> best,
>>>> Colin
>>>> 
>>>> 
>>>> On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
>>>>> Hi Tom,
>>>>> 
>>>>> Makes sense. Thanks for the explanation. I get what Colin had meant
>>>> earlier.
>>>>> 
>>>>> Would a different signature for the interface work? Example below, but
>>>>> please feel free to suggest alternatives if there are any possibilities
>>>> of
>>>>> such.
>>>>> 
>>>>> If needed, then deprecating this and introducing a new signature would
>>> be
>>>>> straight-forward as both (old and new) calls could be made serially in
>>>> the
>>>>> LogValidator allowing a coexistence for a transition period.
>>>>> 
>>>>> interface BrokerRecordValidator {
>>>>>   /**
>>>>>* Validate the record for a given topic-partition.
>>>>>*/
>>>>>   Optional validateRecord(TopicPartition
>>>> topicPartition,
>>>>> int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[]
>>>>> headers);
>>>>> }
>>>>> 
>>>>> 
>>>>> On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley 
>>> wrote:
>>>>> 
>>>>>> Hi Soumyajit,
>>>>>> 
>>>>>> Although that class does indeed have public access at the Java level,
>>>> it
>>>>>> does so only because it needs to be used by internal Kafka code which
>>>> lives
>>>>>> in other packages (there isn't any more restrictive access modifier
>>>> which
>>>>>> would work). What the project considers public Java API is determined
>>>> by
>>>>>> what's included in the published Javadocs:
>>>>>> https://kafka.apache.org/27/javadoc/index.html, which doesn't
>>> include
>>>> the
>>>>>> org.apache.kafka.common.record package.
>>>>>> 
>>>>>> One of the problems with making these internal classes public is it
>>>> ties
>>>>>> the project into supporting them as APIs, which can make changing
>>> them
>>>> much
>>>>>> harder and in the long run that can slow, or even prevent, innovation
>>&

Re: [DISCUSS] KIP-729 Custom validation of records on the broker prior to log append

2021-06-30 Thread Nikolay Izhikov
Hello.

I had a very similar proposal [1].
So, yes, I think we should have one implementation of this API in the product.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker

> 30 июня 2021 г., в 17:57, Christopher Shannon 
>  написал(а):
> 
> I would find this feature very useful as well as adding custom validation
> to incoming records would be nice to prevent bad data from making it to the
> topic.
> 
> On Wed, Apr 7, 2021 at 7:03 PM Soumyajit Sahu 
> wrote:
> 
>> Thanks Colin! Good call on the ApiRecordError. We could use
>> InvalidRecordException instead, and have the broker convert it
>> to ApiRecordError.
>> Modified signature below.
>> 
>> interface BrokerRecordValidator {
>>   /**
>>* Validate the record for a given topic-partition.
>>*/
>>Optional validateRecord(TopicPartition
>> topicPartition, ByteBuffer key, ByteBuffer value, Header[] headers);
>> }
>> 
>> On Tue, Apr 6, 2021 at 5:09 PM Colin McCabe  wrote:
>> 
>>> Hi Soumyajit,
>>> 
>>> The difficult thing is deciding which fields to share and how to share
>>> them.  Key and value are probably the minimum we need to make this
>> useful.
>>> If we do choose to go with byte buffer, it is not necessary to also pass
>>> the size, since ByteBuffer maintains that internally.
>>> 
>>> ApiRecordError is also an internal class, so it can't be used in a public
>>> API.  I think most likely if we were going to do this, we would just
>> catch
>>> an exception and use the exception text as the validation error.
>>> 
>>> best,
>>> Colin
>>> 
>>> 
>>> On Tue, Apr 6, 2021, at 15:57, Soumyajit Sahu wrote:
 Hi Tom,
 
 Makes sense. Thanks for the explanation. I get what Colin had meant
>>> earlier.
 
 Would a different signature for the interface work? Example below, but
 please feel free to suggest alternatives if there are any possibilities
>>> of
 such.
 
 If needed, then deprecating this and introducing a new signature would
>> be
 straight-forward as both (old and new) calls could be made serially in
>>> the
 LogValidator allowing a coexistence for a transition period.
 
 interface BrokerRecordValidator {
/**
 * Validate the record for a given topic-partition.
 */
Optional validateRecord(TopicPartition
>>> topicPartition,
 int keySize, ByteBuffer key, int valueSize, ByteBuffer value, Header[]
 headers);
 }
 
 
 On Tue, Apr 6, 2021 at 12:54 AM Tom Bentley 
>> wrote:
 
> Hi Soumyajit,
> 
> Although that class does indeed have public access at the Java level,
>>> it
> does so only because it needs to be used by internal Kafka code which
>>> lives
> in other packages (there isn't any more restrictive access modifier
>>> which
> would work). What the project considers public Java API is determined
>>> by
> what's included in the published Javadocs:
> https://kafka.apache.org/27/javadoc/index.html, which doesn't
>> include
>>> the
> org.apache.kafka.common.record package.
> 
> One of the problems with making these internal classes public is it
>>> ties
> the project into supporting them as APIs, which can make changing
>> them
>>> much
> harder and in the long run that can slow, or even prevent, innovation
>>> in
> the rest of Kafka.
> 
> Kind regards,
> 
> Tom
> 
> 
> 
> On Sun, Apr 4, 2021 at 7:31 PM Soumyajit Sahu <
>>> soumyajit.s...@gmail.com>
> wrote:
> 
>> Hi Colin,
>> I see that both the interface "Record" and the implementation
>> "DefaultRecord" being used in LogValidator.java are public
>> interfaces/classes.
>> 
>> 
>> 
> 
>>> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/Records.java
>> and
>> 
>> 
> 
>>> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/record/DefaultRecord.java
>> 
>> So, it should be ok to use them. Let me know what you think.
>> 
>> Thanks,
>> Soumyajit
>> 
>> 
>> On Fri, Apr 2, 2021 at 8:51 AM Colin McCabe 
>>> wrote:
>> 
>>> Hi Soumyajit,
>>> 
>>> I believe we've had discussions about proposals similar to this
>>> before,
>>> although I'm having trouble finding one right now.  The issue
>> here
>>> is
>> that
>>> Record is a private class -- it is not part of any public API,
>> and
>>> may
>>> change at any time.  So we can't expose it in public APIs.
>>> 
>>> best,
>>> Colin
>>> 
>>> 
>>> On Thu, Apr 1, 2021, at 14:18, Soumyajit Sahu wrote:
 Hello All,
 I would like to start a discussion on the KIP-729.
 
 
>>> 
>> 
> 
>>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-729%3A+Custom+validation+of+records+on+the+broker+prior+to+log+append
 
 Thanks!
 

Re: [VOTE] KIP-700: Add Describe Cluster API

2021-01-07 Thread Nikolay Izhikov
+1

> 6 янв. 2021 г., в 16:53, David Jacot  написал(а):
> 
> Hi all,
> 
> I'd like to start the vote on KIP-700: Add Describe Cluster API. This KIP
> is here:
> https://cwiki.apache.org/confluence/x/jQ4mCg
> 
> Please take a look and vote if you can.
> 
> Best,
> David



Re: [DISCUSS] KIP-687: Automatic Reloading of Security Store

2020-12-04 Thread Nikolay Izhikov
Hello, Boyang Chen.

I think this KIP overlaps with my idea [1] of exposing information about 
certificates Kafka uses.
A Kafka administrator should initiate the certificate renewal procedure not long 
before the certificate expires.
But, for now, there is no way for administrators to know the expiration date of 
the certificate.
My proposal is to expose this info in the describe command.

What do you think?
Do we need it?

[1] https://mail-archives.apache.org/mod_mbox/kafka-dev/202012.mbox/browser


> 4 дек. 2020 г., в 15:00, Noa Resare  написал(а):
> 
> Hi Boyang,
> 
> I think that it would improve the ergonomics of dealing with short lived 
> certificates to have this be the default behaviour.
> 
> It should be noted that transparently reloading certificates and keys when 
> they changed on disk can be implemented right now registering a custom 
> KeyManagerFactory, but to say that the JDK is designed to make this easy 
> would be an overstatement. The things that we do to get this working:
> 
> 1. Create a class implementing SecurityProviderCreator that will return a 
> Provider that registers a custom KeyManagerFactory implementation.
> 2. This custom KeyManagerFactory would return KeyManager instances that 
> implements X509ExendedKeyManager
> 3. The custom KeyManager would return cached but up to date values for the 
> getCertificateChain() and getPrivateKey() methods.
> 5. Configure Kafka with security.providers referencing the class defined in 1)
> 
> This is not something I would wish upon anyone, but it works. Solving this 
> for everyone inside Apache Kafka seems like a much preferred solution.
> 
> Cheers
> noa
> 
> ps. It seems my apple.com email address ends up on the list as 
> apple.com.INVALID. Is this a known problem? For now I’m working around it by 
> using my personal email.
> 
>> On 4 Dec 2020, at 01:28, Boyang Chen  wrote:
>> 
>> Hey there,
>> 
>> I would like to start the discussion thread for KIP-687:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-687%3A+Automatic+Reloading+of+Security+Store
>> 
>> This KIP is trying to deprecate the AlterConfigs API support of updating
>> the security store by reloading path in-place, and replace with a
>> file-watch mechanism inside the broker. Let me know what you think.
>> 
>> Best,
>> Boyang
> 
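A minimal sketch of the file-watch idea from the quoted KIP: the broker notices that the security store file has changed and reloads it, instead of waiting for an explicit AlterConfigs call. Polling the modification time is shown here for simplicity (a real implementation could use java.nio.file.WatchService); all class and method names are illustrative assumptions:

```java
import java.io.File;

// Hypothetical sketch of the KIP-687 file-watch mechanism: detect that the
// security store file changed so the broker can reload its key material.
// Names are illustrative assumptions, not Kafka code.
class KeystoreChangeDetector {
    private final File keystore;
    private long lastSeenMtime;

    KeystoreChangeDetector(File keystore) {
        this.keystore = keystore;
        this.lastSeenMtime = keystore.lastModified();
    }

    // Returns true exactly once per observed modification; the caller would
    // rebuild the SSL engine's key material when this fires.
    boolean changed() {
        long mtime = keystore.lastModified();
        if (mtime != lastSeenMtime) {
            lastSeenMtime = mtime;
            return true;
        }
        return false;
    }
}

public class KeystoreWatchDemo {
    public static void main(String[] args) throws Exception {
        File store = File.createTempFile("keystore", ".jks");
        KeystoreChangeDetector detector = new KeystoreChangeDetector(store);
        store.setLastModified(store.lastModified() + 5000); // simulate a rotation
        System.out.println("reload needed: " + detector.changed());
    }
}
```

This also shows why the broker-internal approach is simpler than the custom KeyManagerFactory workaround described above: the detection and reload live in one place instead of being threaded through the JSSE provider machinery.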



Re: Config command to describe SSL certificate parameters

2020-12-03 Thread Nikolay Izhikov
Hello, Igor.

Yes, we can.
But it requires access:
a. To the broker server via SSH.
b. To the JKS file itself: whoever wants to get the params must know the JKS 
password and have read permission for the file.

It seems to me that this level of permissions is too high for a simple «know 
when the cert will expire» task.

My idea is to expose the SSL params via an admin command so they can be easily 
obtained and used in some kind of automation, alerting, or third-party UI tool.

What do you think?

> 3 дек. 2020 г., в 12:32, Igor Soarez  написал(а):
> 
> Hi Nikolay,
> 
> You can use OpenSSL s_client to check all these things.
> 
> https://www.openssl.org/docs/manmaster/man1/s_client.html
> 
> --
> Igor
> 
> On Wed, Dec 2, 2020, at 5:44 PM, Nikolay Izhikov wrote:
>> Hello.
>> 
>> Kafka has an ability to configure SSL connections between brokers and 
>> clients.
>> SSL certificates has different params such as
>>  *   issuer
>>  *   CN
>>  *   validity date 
>> and so on.
>> 
>> Values of these parameters important during maintenance:
>>  *   checking correctness of deployment
>>  *   planning for certification renewal (validity date)
>> 
>> AFAIK, Kafka doesn’t have a standard way to expose parameters of 
>> configured SSL certificates.
>> 
>> I think we can return those parameters as a result of some Admin command.
>> 
>> `./bin/kafka-configs.sh —entity-type ssl-certificates —describe` 
>> 
>> What do you think?
>> I can create KIP if this idea is supported by the community.



Config command to describe SSL certificate parameters

2020-12-02 Thread Nikolay Izhikov
Hello.

Kafka has the ability to configure SSL connections between brokers and clients.
SSL certificates have different params such as
*   issuer
*   CN
*   validity date 
and so on.

The values of these parameters are important during maintenance:
*   checking the correctness of a deployment
*   planning for certificate renewal (validity date)

AFAIK, Kafka doesn’t have a standard way to expose parameters of configured SSL 
certificates.

I think we can return those parameters as a result of some Admin command.

`./bin/kafka-configs.sh —entity-type ssl-certificates —describe` 

What do you think?
I can create KIP if this idea is supported by the community.
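To illustrate the kind of information such a describe command could surface, here is a sketch that reads certificate expiry dates straight from a JKS keystore. The class and method names are made up for the example; note this direct keystore access (path plus password) is exactly what the proposed admin command would make unnecessary:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import java.util.Date;
import java.util.Enumeration;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the data a describe command could expose: the expiry date of every
// certificate in a JKS keystore. Names are illustrative, not a Kafka API.
public class CertExpiry {
    static Map<String, Date> expiryByAlias(KeyStore ks) throws Exception {
        Map<String, Date> result = new LinkedHashMap<>();
        for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements(); ) {
            String alias = aliases.nextElement();
            Certificate cert = ks.getCertificate(alias);
            if (cert instanceof X509Certificate) {
                // getNotAfter() is the validity end date -- the renewal deadline.
                result.put(alias, ((X509Certificate) cert).getNotAfter());
            }
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        if (args.length >= 2) {
            // Usage: CertExpiry <keystore-path> <keystore-password>
            try (InputStream in = new FileInputStream(args[0])) {
                ks.load(in, args[1].toCharArray());
            }
        } else {
            ks.load(null, null); // empty in-memory keystore for demonstration
        }
        expiryByAlias(ks).forEach((alias, notAfter) ->
                System.out.println(alias + " expires " + notAfter));
    }
}
```

An admin command could report the same fields (plus issuer and CN) over the Kafka protocol, so tooling would need neither SSH access nor the keystore password.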

Re: [DISCUSSION] KIP-686: API to ensure Records policy on the broker

2020-12-02 Thread Nikolay Izhikov
Hello, Paul.

Thanks for the feedback!

> How does the producer get notified of a failure to pass the RecordPolicy for 
> one or more records, 

The producer will receive `PolicyViolationException`.

> how should it recover?

The obvious answers are:

The producer should switch to the correct schema, or the producer should be 
stopped abnormally.

> Assuming a RecordPolicy can be loaded by a broker without restarting it, what 
> is the mechanism by which this happens?

Thanks for the good question:

I think we should choose one of the following alternatives:

	1. We allow the users to use any `RecordsPolicy` implementation.
	In this case, the Kafka administrator is responsible for putting a 
custom jar with the `RecordsPolicy` implementation on every Kafka broker's 
classpath (libs directory).
	AFAIK this was selected as the base scenario for the `Authorizer` 
implementation.

	2. We allow the users to select an implementation from some predefined 
list that the Kafka developers included in a release.
	In this case, every Kafka broker will have a specific 
implementation from the Kafka release itself.
	We can go with this because a wrong `RecordsPolicy` 
implementation can affect broker stability and performance.

I, personally, prefer the first choice.

> Must writes to replicas also adhere to the RecordPolicy?

I think we should check only on the leader.

> Must already-written records adhere to the RecordPolicy, if it is added 
> later?

No.

> managing schema outside of kafka itself using something like the confluent 
> schema registry.  
> Maybe you can say why RecordPolicy would be better?

1. I can't agree that a commercial product is an alternative to the proposed 
open-source API.
Moreover, I propose to add an API that has only a little overlap with such a big 
product as the Schema Registry as a whole.

2. AFAIU the Confluent Schema Registry uses a similar technique to enforce the 
record schema in a topic.
My understanding is based on the Schema Registry docs [1]. Specifically:
- Confluent Schema Registry has custom topic configuration options to 
enable or disable schema checks.
- "With this configuration, if a message is produced to the topic 
my-topic-sv that does not have a valid schema for the value of the message, an 
error is returned to the producer, and the message is discarded."

[1] 
https://docs.confluent.io/platform/current/schema-registry/schema-validation.html


> 1 дек. 2020 г., в 06:15, Paul Whalen  написал(а):
> 
> Nikolay,
> 
> I'm not a committer, but perhaps I can start the discussion.  I've had the
> urge for a similar feature after being bitten by writing a poorly formed
> record to a topic - it's natural to want to push schema validation into the
> broker, since that's the way regular databases work.  But I'm a bit
> skeptical of the complexity it introduces.  Some questions I think would
> have to be answered that aren't currently in the KIP:
> - How does the producer get notified of a failure to pass the RecordPolicy
> for one or more records, and how should it recover?
> - Assuming a RecordPolicy can be loaded by a broker without restarting it,
> what is the mechanism by which this happens?
> - Must writes to replicas also adhere to the RecordPolicy?
> - Must already-written records adhere to RecordPolicy, if it is
> added later?
> 
> Also, the rejected alternatives section is blank - I see the status quo as
> at least one alternative, in particular, managing schema outside of kafka
> itself using something like the confluent schema registry.  Maybe you can
> say why RecordPolicy would be better?
> 
> Best,
> Paul
> 
> On Mon, Nov 30, 2020 at 9:58 AM Nikolay Izhikov  wrote:
> 
>> Friendly bump.
>> 
>> Please, share your feedback.
>> Do we need those feature in the Kafka?
>> 
>>> 23 нояб. 2020 г., в 12:09, Nikolay Izhikov 
>> написал(а):
>>> 
>>> Hello!
>>> 
>>> Any additional feedback on this KIP?
>>> I believe this API can be useful for Kafka users.
>>> 
>>> 
>>>> 18 нояб. 2020 г., в 14:47, Nikolay Izhikov 
>> написал(а):
>>>> 
>>>> Hello, Ismael.
>>>> 
>>>> Thanks for the feedback.
>>>> You are right, I read public interfaces definition not carefully :)
>>>> 
>>>> Updated KIP according to your objection.
>>>> I propose to expose 2 new public interfaces:
>>>> 
>>>> ```
>>>> package org.apache.kafka.common;
>>>> 
>>>> public interface Record {
>>>>  long timestamp();
>>>> 
>>>>  boolean hasKey();
>>>> 
>>>>  ByteBuffer key();
>>>> 
>>>&

Re: [DISCUSSION] KIP-686: API to ensure Records policy on the broker

2020-11-30 Thread Nikolay Izhikov
Friendly bump.

Please, share your feedback.
Do we need this feature in Kafka?

> 23 нояб. 2020 г., в 12:09, Nikolay Izhikov  
> написал(а):
> 
> Hello!
> 
> Any additional feedback on this KIP?
> I believe this API can be useful for Kafka users.
> 
> 
>> 18 нояб. 2020 г., в 14:47, Nikolay Izhikov  
>> написал(а):
>> 
>> Hello, Ismael.
>> 
>> Thanks for the feedback.
>> You are right, I read public interfaces definition not carefully :)
>> 
>> Updated KIP according to your objection.
>> I propose to expose 2 new public interfaces:
>> 
>> ```
>> package org.apache.kafka.common;
>> 
>> public interface Record {
>>   long timestamp();
>> 
>>   boolean hasKey();
>> 
>>   ByteBuffer key();
>> 
>>   boolean hasValue();
>> 
>>   ByteBuffer value();
>> 
>>   Header[] headers();
>> }
>> 
>> package org.apache.kafka.server.policy;
>> 
>> public interface RecordsPolicy extends Configurable, AutoCloseable {
>>   void validate(String topic, int partition, Iterable 
>> records) throws PolicyViolationException;
>> }
>> ```
>> 
>> Data exposed in Record and in validate method itself seems to enough for 
>> implementation of any reasonable Policy.
>> 
>>> 17 нояб. 2020 г., в 19:44, Ismael Juma  написал(а):
>>> 
>>> Thanks for the KIP. The policy interface is a small part of this. You also
>>> have to describe the new public API that will be exposed as part of this.
>>> For example, there is no public `Records` class.
>>> 
>>> Ismael
>>> 
>>> On Tue, Nov 17, 2020 at 8:24 AM Nikolay Izhikov  wrote:
>>> 
>>>> Hello.
>>>> 
>>>> I want to start discussion of the KIP-686 [1].
>>>> I propose to introduce the new public interface for it RecordsPolicy:
>>>> 
>>>> ```
>>>> public interface RecordsPolicy extends Configurable, AutoCloseable {
>>>> void validate(String topic, Records records) throws
>>>> PolicyViolationException;
>>>> }
>>>> ```
>>>> 
>>>> and a two new configuration options:
>>>>  * `records.policy.class.name: String` - sets class name of the
>>>> implementation of RecordsPolicy for the specific topic.
>>>>  * `records.policy.enabled: Boolean` - enable or disable records policy
>>>> for the topic.
>>>> 
>>>> If `records.policy.enabled=true` then an instance of the `RecordsPolicy`
>>>> should check each Records batch before applying data to the log.
>>>> If `PolicyViolationException`  thrown from the `RecordsPolicy#validate`
>>>> method then no data added to the log and the client receives an error.
>>>> 
>>>> Motivation:
>>>> 
>>>> During the adoption of Kafka in large enterprises, it's important to
>>>> guarantee data in some topic conforms to the specific format.
>>>> When data are written and read by the different applications developed by
>>>> the different teams it's hard to guarantee data format using only custom
>>>> SerDe, because malicious applications can use different SerDe.
>>>> The data format can be enforced only on the broker side.
>>>> 
>>>> Please, share your feedback.
>>>> 
>>>> [1]
>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
>> 
> 



Re: [DISCUSSION] KIP-686: API to ensure Records policy on the broker

2020-11-23 Thread Nikolay Izhikov
Hello!

Any additional feedback on this KIP?
I believe this API can be useful for Kafka users.


> 18 нояб. 2020 г., в 14:47, Nikolay Izhikov  
> написал(а):
> 
> Hello, Ismael.
> 
> Thanks for the feedback.
> You are right, I didn't read the public interfaces definition carefully :)
> 
> Updated KIP according to your objection.
> I propose to expose 2 new public interfaces:
> 
> ```
> package org.apache.kafka.common;
> 
> public interface Record {
>long timestamp();
> 
>boolean hasKey();
> 
>ByteBuffer key();
> 
>boolean hasValue();
> 
>ByteBuffer value();
> 
>Header[] headers();
> }
> 
> package org.apache.kafka.server.policy;
> 
> public interface RecordsPolicy extends Configurable, AutoCloseable {
>void validate(String topic, int partition, Iterable 
> records) throws PolicyViolationException;
> }
> ```
> 
> The data exposed in Record, and the validate method itself, seem to be enough 
> to implement any reasonable policy.
> 
>> 17 нояб. 2020 г., в 19:44, Ismael Juma  написал(а):
>> 
>> Thanks for the KIP. The policy interface is a small part of this. You also
>> have to describe the new public API that will be exposed as part of this.
>> For example, there is no public `Records` class.
>> 
>> Ismael
>> 
>> On Tue, Nov 17, 2020 at 8:24 AM Nikolay Izhikov  wrote:
>> 
>>> Hello.
>>> 
>>> I want to start discussion of the KIP-686 [1].
>>> I propose to introduce the new public interface for it RecordsPolicy:
>>> 
>>> ```
>>> public interface RecordsPolicy extends Configurable, AutoCloseable {
>>>  void validate(String topic, Records records) throws
>>> PolicyViolationException;
>>> }
>>> ```
>>> 
>>> and a two new configuration options:
>>>   * `records.policy.class.name: String` - sets class name of the
>>> implementation of RecordsPolicy for the specific topic.
>>>   * `records.policy.enabled: Boolean` - enable or disable records policy
>>> for the topic.
>>> 
>>> If `records.policy.enabled=true` then an instance of the `RecordsPolicy`
>>> should check each Records batch before applying data to the log.
>>> If `PolicyViolationException`  thrown from the `RecordsPolicy#validate`
>>> method then no data added to the log and the client receives an error.
>>> 
>>> Motivation:
>>> 
>>> During the adoption of Kafka in large enterprises, it's important to
>>> guarantee data in some topic conforms to the specific format.
>>> When data are written and read by the different applications developed by
>>> the different teams it's hard to guarantee data format using only custom
>>> SerDe, because malicious applications can use different SerDe.
>>> The data format can be enforced only on the broker side.
>>> 
>>> Please, share your feedback.
>>> 
>>> [1]
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
> 



Re: [DISCUSSION] KIP-686: API to ensure Records policy on the broker

2020-11-18 Thread Nikolay Izhikov
Hello, Ismael.

Thanks for the feedback.
You are right, I didn't read the public interfaces definition carefully :)

Updated KIP according to your objection.
I propose to expose 2 new public interfaces:

```
package org.apache.kafka.common;

public interface Record {
long timestamp();

boolean hasKey();

ByteBuffer key();

boolean hasValue();

ByteBuffer value();

Header[] headers();
}

package org.apache.kafka.server.policy;

public interface RecordsPolicy extends Configurable, AutoCloseable {
void validate(String topic, int partition, Iterable 
records) throws PolicyViolationException;
}
```

The data exposed in Record, and the validate method itself, seem to be enough 
to implement any reasonable policy.

> 17 нояб. 2020 г., в 19:44, Ismael Juma  написал(а):
> 
> Thanks for the KIP. The policy interface is a small part of this. You also
> have to describe the new public API that will be exposed as part of this.
> For example, there is no public `Records` class.
> 
> Ismael
> 
> On Tue, Nov 17, 2020 at 8:24 AM Nikolay Izhikov  wrote:
> 
>> Hello.
>> 
>> I want to start discussion of the KIP-686 [1].
>> I propose to introduce the new public interface for it RecordsPolicy:
>> 
>> ```
>> public interface RecordsPolicy extends Configurable, AutoCloseable {
>>   void validate(String topic, Records records) throws
>> PolicyViolationException;
>> }
>> ```
>> 
>> and a two new configuration options:
>>* `records.policy.class.name: String` - sets class name of the
>> implementation of RecordsPolicy for the specific topic.
>>* `records.policy.enabled: Boolean` - enable or disable records policy
>> for the topic.
>> 
>> If `records.policy.enabled=true` then an instance of the `RecordsPolicy`
>> should check each Records batch before applying data to the log.
>> If `PolicyViolationException`  thrown from the `RecordsPolicy#validate`
>> method then no data added to the log and the client receives an error.
>> 
>> Motivation:
>> 
>> During the adoption of Kafka in large enterprises, it's important to
>> guarantee data in some topic conforms to the specific format.
>> When data are written and read by the different applications developed by
>> the different teams it's hard to guarantee data format using only custom
>> SerDe, because malicious applications can use different SerDe.
>> The data format can be enforced only on the broker side.
>> 
>> Please, share your feedback.
>> 
>> [1]
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker



[DISCUSSION] KIP-686: API to ensure Records policy on the broker

2020-11-17 Thread Nikolay Izhikov
Hello.

I want to start the discussion of KIP-686 [1].
I propose to introduce a new public interface, RecordsPolicy:

```
public interface RecordsPolicy extends Configurable, AutoCloseable {
   void validate(String topic, Records records) throws PolicyViolationException;
}
```

and two new configuration options:
* `records.policy.class.name: String` - sets the class name of the 
RecordsPolicy implementation for a specific topic.
* `records.policy.enabled: Boolean` - enables or disables the records policy for 
the topic.

If `records.policy.enabled=true`, then an instance of the `RecordsPolicy` checks 
each record batch before the data is applied to the log.
If `PolicyViolationException` is thrown from the `RecordsPolicy#validate` method, 
then no data is added to the log and the client receives an error.

Motivation: 

During the adoption of Kafka in large enterprises, it's important to guarantee 
that data in a topic conforms to a specific format.
When data is written and read by different applications developed by 
different teams, it's hard to guarantee the data format using only a custom SerDe, 
because a malicious application can use a different SerDe. 
The data format can be enforced only on the broker side.

Please, share your feedback.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-686%3A+API+to+ensure+Records+policy+on+the+broker
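
To make the proposed contract concrete, here is a sketch of the kind of per-batch check a policy could perform. It is purely illustrative and written in Python; a real implementation would be a Java class implementing the proposed `RecordsPolicy` interface and throwing `PolicyViolationException`. The JSON-only rule and all names below are invented for the example.

```python
# Illustrative sketch (not the Kafka API): a policy that rejects a whole
# batch if any record value is not valid JSON, mirroring the semantics of
# RecordsPolicy#validate - raise and nothing reaches the log.
import json

class PolicyViolationError(ValueError):
    """Stand-in for Kafka's PolicyViolationException in this sketch."""

def validate_json_values(topic, records):
    for i, value in enumerate(records):
        try:
            json.loads(value)
        except (TypeError, ValueError):
            raise PolicyViolationError(
                "record %d in topic %r is not valid JSON" % (i, topic))

validate_json_values("events", [b'{"id": 1}', b'{"id": 2}'])   # accepted
try:
    validate_json_values("events", [b'{"id": 3}', b"not json"])
except PolicyViolationError as exc:
    print("rejected:", exc)
```

Because the check runs on the broker, it holds regardless of which SerDe a client uses, which is exactly the gap the KIP's motivation describes.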

[jira] [Created] (KAFKA-10732) API to ensure Records policy on the broker

2020-11-17 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-10732:
---

 Summary: API to ensure Records policy on the broker
 Key: KAFKA-10732
 URL: https://issues.apache.org/jira/browse/KAFKA-10732
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


During the adoption of Kafka in large enterprises, it's important to guarantee 
that data in a topic conforms to a specific format.

When data is written and read by different applications developed by 
different teams, it's hard to guarantee the data format using only a custom SerDe, 
because a malicious application can use a different SerDe.

The data format can be enforced only on the broker side.

I propose to introduce a new public interface, RecordsPolicy:

{noformat}
public interface RecordsPolicy extends Configurable, AutoCloseable {
   void validate(String topic, Records records) throws PolicyViolationException;
}
{noformat}

and two new configuration options:

* {{records.policy.class.name: String}} - sets the class name of the RecordsPolicy 
implementation for a specific topic.
* {{records.policy.enabled: Boolean}}  - enables or disables the records policy for 
the topic





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSSION] python code style checks

2020-10-09 Thread Nikolay Izhikov
Hello!

Kafka uses a relatively strict code style for Java code.
The code style is enforced during the project build.

But, for now, we don't check the Python test code style.
I’ve checked the system test code with the default pylint settings and got the 
following result: "Your code has been rated at 5.98/10".

I propose to add Python code style checks to the build process and fix the existing 
code style issues.

What do you think?

Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-08 Thread Nikolay Izhikov
Guozhang and all others who involved.
Thanks for your help!


> 7 окт. 2020 г., в 19:42, Guozhang Wang  написал(а):
> 
> Hello Nikolay,
> 
> I've merged the PR to trunk. Thanks for your huge effort and patience going
> through the review!
> 
> Guozhang
> 
> On Wed, Oct 7, 2020 at 6:52 AM Nikolay Izhikov  wrote:
> 
>> Great news!
>> Thanks Magnus!
>> 
>> I’ve updated the PR.
>> 
>> Looks like we ready to merge it.
>> 
>>> 7 окт. 2020 г., в 15:29, Magnus Edenhill 
>> написал(а):
>>> 
>>> Hi,
>>> 
>>> ducktape v0.8.0 is now released.
>>> 
>>> Regards,
>>> Magnus
>>> 
>>> 
>>> Den ons 7 okt. 2020 kl 10:50 skrev Nikolay Izhikov >> :
>>> 
>>>> Hello.
>>>> 
>>>> Got 4 approvals for PR [1]
>>>> The only thing we need to be able to merge it is a ducktape 0.8 release.
>>>> If ducktape team need any help with the release, please, let me know.
>>>> 
>>>> [1] https://github.com/apache/kafka/pull/9196
>>>> 
>>>> 
>>>>> 21 сент. 2020 г., в 12:58, Nikolay Izhikov 
>>>> написал(а):
>>>>> 
>>>>> Hello.
>>>>> 
>>>>> I fixed two system tests that fails in trunk, also.
>>>>> 
>>>>> 
>> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
>>>>> streams_static_membership_test.py
>>>>> 
>>>>> Please, take a look at my PR [1]
>>>>> 
>>>>> [1] https://github.com/apache/kafka/pull/9312
>>>>> 
>>>>>> 20 сент. 2020 г., в 06:11, Guozhang Wang 
>>>> написал(а):
>>>>>> 
>>>>>> I've triggered a system test on top of your branch.
>>>>>> 
>>>>>> Maybe you could also re-run the jenkins unit tests since currently all
>>>> of
>>>>>> them fails but you've only touched on system tests, so I'd like to
>>>> confirm
>>>>>> at least one successful run.
>>>>>> 
>>>>>> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov 
>>>> wrote:
>>>>>> 
>>>>>>> Hello, Guozhang.
>>>>>>> 
>>>>>>>> I can help run the test suite once your PR is cleanly rebased to
>>>> verify
>>>>>>> the whole suite works
>>>>>>> 
>>>>>>> Thank you for joining to the review.
>>>>>>> 
>>>>>>> 1. PR rebased on the current trunk.
>>>>>>> 
>>>>>>> 2. I triggered all tests in my private environment to verify them
>> after
>>>>>>> rebase.
>>>>>>> Will inform you once tests passed on my environment.
>>>>>>> 
>>>>>>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>>>>>> For now, PR based on the ducktape trunk branch [3], not some
>>>>>>> specific release.
>>>>>>> If ducktape team need any help with the release, please, let me
>>>>>>> know.
>>>>>>> 
>>>>>>> [1] https://github.com/confluentinc/ducktape/issues/245
>>>>>>> [2] https://github.com/apache/kafka/pull/9196
>>>>>>> [3]
>>>>>>> 
>>>> 
>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>>>>>>> 
>>>>>>>> 16 сент. 2020 г., в 07:32, Guozhang Wang 
>>>>>>> написал(а):
>>>>>>>> 
>>>>>>>> Hello Nikolay,
>>>>>>>> 
>>>>>>>> I can help run the test suite once your PR is cleanly rebased to
>>>> verify
>>>>>>> the
>>>>>>>> whole suite works and then I can merge (I'm trusting Ivan and Magnus
>>>> here
>>>>>>>> for their reviews :)
>>>>>>>> 
>>>>>>>> Guozhang
>>>>>>>> 
>>>>>>>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov <
>> nizhi...@apache.org>
>>>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Hello!
>>>>>>>>> 
>>>>>>>>> I got 2 appr

Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Nikolay Izhikov
Great news! 
Thanks Magnus!

I’ve updated the PR.

Looks like we're ready to merge it.

> 7 окт. 2020 г., в 15:29, Magnus Edenhill  написал(а):
> 
> Hi,
> 
> ducktape v0.8.0 is now released.
> 
> Regards,
> Magnus
> 
> 
> Den ons 7 okt. 2020 kl 10:50 skrev Nikolay Izhikov :
> 
>> Hello.
>> 
>> Got 4 approvals for PR [1]
>> The only thing we need to be able to merge it is a ducktape 0.8 release.
>> If ducktape team need any help with the release, please, let me know.
>> 
>> [1] https://github.com/apache/kafka/pull/9196
>> 
>> 
>>> 21 сент. 2020 г., в 12:58, Nikolay Izhikov 
>> написал(а):
>>> 
>>> Hello.
>>> 
>>> I fixed two system tests that fails in trunk, also.
>>> 
>>> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
>>> streams_static_membership_test.py
>>> 
>>> Please, take a look at my PR [1]
>>> 
>>> [1] https://github.com/apache/kafka/pull/9312
>>> 
>>>> 20 сент. 2020 г., в 06:11, Guozhang Wang 
>> написал(а):
>>>> 
>>>> I've triggered a system test on top of your branch.
>>>> 
>>>> Maybe you could also re-run the jenkins unit tests since currently all
>> of
>>>> them fails but you've only touched on system tests, so I'd like to
>> confirm
>>>> at least one successful run.
>>>> 
>>>> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov 
>> wrote:
>>>> 
>>>>> Hello, Guozhang.
>>>>> 
>>>>>> I can help run the test suite once your PR is cleanly rebased to
>> verify
>>>>> the whole suite works
>>>>> 
>>>>> Thank you for joining to the review.
>>>>> 
>>>>> 1. PR rebased on the current trunk.
>>>>> 
>>>>> 2. I triggered all tests in my private environment to verify them after
>>>>> rebase.
>>>>>  Will inform you once tests passed on my environment.
>>>>> 
>>>>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>>>>  For now, PR based on the ducktape trunk branch [3], not some
>>>>> specific release.
>>>>>  If ducktape team need any help with the release, please, let me
>>>>> know.
>>>>> 
>>>>> [1] https://github.com/confluentinc/ducktape/issues/245
>>>>> [2] https://github.com/apache/kafka/pull/9196
>>>>> [3]
>>>>> 
>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>>>>> 
>>>>>> 16 сент. 2020 г., в 07:32, Guozhang Wang 
>>>>> написал(а):
>>>>>> 
>>>>>> Hello Nikolay,
>>>>>> 
>>>>>> I can help run the test suite once your PR is cleanly rebased to
>> verify
>>>>> the
>>>>>> whole suite works and then I can merge (I'm trusting Ivan and Magnus
>> here
>>>>>> for their reviews :)
>>>>>> 
>>>>>> Guozhang
>>>>>> 
>>>>>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov 
>>>>> wrote:
>>>>>> 
>>>>>>> Hello!
>>>>>>> 
>>>>>>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
>>>>>>> Committers, please, join the review.
>>>>>>> 
>>>>>>>> 3 сент. 2020 г., в 11:06, Nikolay Izhikov 
>>>>>>> написал(а):
>>>>>>>> 
>>>>>>>> Hello!
>>>>>>>> 
>>>>>>>> Just a friendly reminder.
>>>>>>>> 
>>>>>>>> Patch to resolve some kind of technical debt - python2 in system
>> tests
>>>>>>> is ready!
>>>>>>>> Can someone, please, take a look?
>>>>>>>> 
>>>>>>>> https://github.com/apache/kafka/pull/9196
>>>>>>>> 
>>>>>>>>> 28 авг. 2020 г., в 11:19, Nikolay Izhikov 
>>>>>>> написал(а):
>>>>>>>>> 
>>>>>>>>> Hello!
>>>>>>>>> 
>>>>>>>>> Any feedback on this?
>>>>>>>>> What I should additionally do to prepare system tests migration?
>>>>>>>>&g

Re: [DISCUSSION] Upgrade system tests to python 3

2020-10-07 Thread Nikolay Izhikov
Hello.

Got 4 approvals for PR [1].
The only thing we need to be able to merge it is a ducktape 0.8 release.
If the ducktape team needs any help with the release, please let me know.

[1] https://github.com/apache/kafka/pull/9196


> 21 сент. 2020 г., в 12:58, Nikolay Izhikov  
> написал(а):
> 
> Hello.
> 
> I also fixed two system tests that fail in trunk.
> 
> streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
> streams_static_membership_test.py
> 
> Please, take a look at my PR [1]
> 
> [1] https://github.com/apache/kafka/pull/9312
> 
>> 20 сент. 2020 г., в 06:11, Guozhang Wang  написал(а):
>> 
>> I've triggered a system test on top of your branch.
>> 
>> Maybe you could also re-run the jenkins unit tests since currently all of
>> them fails but you've only touched on system tests, so I'd like to confirm
>> at least one successful run.
>> 
>> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov  wrote:
>> 
>>> Hello, Guozhang.
>>> 
>>>> I can help run the test suite once your PR is cleanly rebased to verify
>>> the whole suite works
>>> 
>>> Thank you for joining to the review.
>>> 
>>> 1. PR rebased on the current trunk.
>>> 
>>> 2. I triggered all tests in my private environment to verify them after
>>> rebase.
>>>   Will inform you once tests passed on my environment.
>>> 
>>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>>   For now, PR based on the ducktape trunk branch [3], not some
>>> specific release.
>>>   If ducktape team need any help with the release, please, let me
>>> know.
>>> 
>>> [1] https://github.com/confluentinc/ducktape/issues/245
>>> [2] https://github.com/apache/kafka/pull/9196
>>> [3]
>>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>>> 
>>>> 16 сент. 2020 г., в 07:32, Guozhang Wang 
>>> написал(а):
>>>> 
>>>> Hello Nikolay,
>>>> 
>>>> I can help run the test suite once your PR is cleanly rebased to verify
>>> the
>>>> whole suite works and then I can merge (I'm trusting Ivan and Magnus here
>>>> for their reviews :)
>>>> 
>>>> Guozhang
>>>> 
>>>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov 
>>> wrote:
>>>> 
>>>>> Hello!
>>>>> 
>>>>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
>>>>> Committers, please, join the review.
>>>>> 
>>>>>> 3 сент. 2020 г., в 11:06, Nikolay Izhikov 
>>>>> написал(а):
>>>>>> 
>>>>>> Hello!
>>>>>> 
>>>>>> Just a friendly reminder.
>>>>>> 
>>>>>> Patch to resolve some kind of technical debt - python2 in system tests
>>>>> is ready!
>>>>>> Can someone, please, take a look?
>>>>>> 
>>>>>> https://github.com/apache/kafka/pull/9196
>>>>>> 
>>>>>>> 28 авг. 2020 г., в 11:19, Nikolay Izhikov 
>>>>> написал(а):
>>>>>>> 
>>>>>>> Hello!
>>>>>>> 
>>>>>>> Any feedback on this?
>>>>>>> What I should additionally do to prepare system tests migration?
>>>>>>> 
>>>>>>>> 24 авг. 2020 г., в 11:17, Nikolay Izhikov 
>>>>> написал(а):
>>>>>>>> 
>>>>>>>> Hello.
>>>>>>>> 
>>>>>>>> PR [1] is ready.
>>>>>>>> Please, review.
>>>>>>>> 
>>>>>>>> But, I need help with the two following questions:
>>>>>>>> 
>>>>>>>> 1. We need a new release of ducktape which includes fixes [2], [3]
>>> for
>>>>> python3.
>>>>>>>> I created the issue in ducktape repo [4].
>>>>>>>> Can someone help me with the release?
>>>>>>>> 
>>>>>>>> 2. I know that some companies run system tests for the trunk on a
>>>>> regular bases.
>>>>>>>> Can someone show me some results of these runs?
>>>>>>>> So, I can compare failures in my PR and in the trunk.
>>>>>>>> 
>>>>>>>&

Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-21 Thread Nikolay Izhikov
Hello.

I also fixed two system tests that fail in trunk.

streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade
streams_static_membership_test.py

Please, take a look at my PR [1]

[1] https://github.com/apache/kafka/pull/9312

> 20 сент. 2020 г., в 06:11, Guozhang Wang  написал(а):
> 
> I've triggered a system test on top of your branch.
> 
> Maybe you could also re-run the jenkins unit tests since currently all of
> them fails but you've only touched on system tests, so I'd like to confirm
> at least one successful run.
> 
> On Wed, Sep 16, 2020 at 3:37 AM Nikolay Izhikov  wrote:
> 
>> Hello, Guozhang.
>> 
>>> I can help run the test suite once your PR is cleanly rebased to verify
>> the whole suite works
>> 
>> Thank you for joining to the review.
>> 
>> 1. PR rebased on the current trunk.
>> 
>> 2. I triggered all tests in my private environment to verify them after
>> rebase.
>>Will inform you once tests passed on my environment.
>> 
>> 3. We need a new ducktape release [1] to be able to merge PR [2].
>>For now, PR based on the ducktape trunk branch [3], not some
>> specific release.
>>If ducktape team need any help with the release, please, let me
>> know.
>> 
>> [1] https://github.com/confluentinc/ducktape/issues/245
>> [2] https://github.com/apache/kafka/pull/9196
>> [3]
>> https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39
>> 
>>> 16 сент. 2020 г., в 07:32, Guozhang Wang 
>> написал(а):
>>> 
>>> Hello Nikolay,
>>> 
>>> I can help run the test suite once your PR is cleanly rebased to verify
>> the
>>> whole suite works and then I can merge (I'm trusting Ivan and Magnus here
>>> for their reviews :)
>>> 
>>> Guozhang
>>> 
>>> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello!
>>>> 
>>>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
>>>> Committers, please, join the review.
>>>> 
>>>>> 3 сент. 2020 г., в 11:06, Nikolay Izhikov 
>>>> написал(а):
>>>>> 
>>>>> Hello!
>>>>> 
>>>>> Just a friendly reminder.
>>>>> 
>>>>> Patch to resolve some kind of technical debt - python2 in system tests
>>>> is ready!
>>>>> Can someone, please, take a look?
>>>>> 
>>>>> https://github.com/apache/kafka/pull/9196
>>>>> 
>>>>>> 28 авг. 2020 г., в 11:19, Nikolay Izhikov 
>>>> написал(а):
>>>>>> 
>>>>>> Hello!
>>>>>> 
>>>>>> Any feedback on this?
>>>>>> What I should additionally do to prepare system tests migration?
>>>>>> 
>>>>>>> 24 авг. 2020 г., в 11:17, Nikolay Izhikov 
>>>> написал(а):
>>>>>>> 
>>>>>>> Hello.
>>>>>>> 
>>>>>>> PR [1] is ready.
>>>>>>> Please, review.
>>>>>>> 
>>>>>>> But, I need help with the two following questions:
>>>>>>> 
>>>>>>> 1. We need a new release of ducktape which includes fixes [2], [3]
>> for
>>>> python3.
>>>>>>> I created the issue in ducktape repo [4].
>>>>>>> Can someone help me with the release?
>>>>>>> 
>>>>>>> 2. I know that some companies run system tests for the trunk on a
>>>> regular bases.
>>>>>>> Can someone show me some results of these runs?
>>>>>>> So, I can compare failures in my PR and in the trunk.
>>>>>>> 
>>>>>>> Results [5] of run all for my PR available in the ticket [6]
>>>>>>> 
>>>>>>> ```
>>>>>>> SESSION REPORT (ALL TESTS)
>>>>>>> ducktape version: 0.8.0
>>>>>>> session_id:   2020-08-23--002
>>>>>>> run time: 1010 minutes 46.483 seconds
>>>>>>> tests run:684
>>>>>>> passed:   505
>>>>>>> failed:   9
>>>>>>> ignored:  170
>>>>>>> ```
>>>>>>> 
>>>>>>> [1] https://github.com/apache/kafka/pull/9196
>>>>>>> [2]
>>

[jira] [Created] (KAFKA-10505) [SystemTests] streams_static_membership.py and streams_upgrade_test.py fails

2020-09-21 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-10505:
---

 Summary: [SystemTests] streams_static_membership.py and 
streams_upgrade_test.py fails
 Key: KAFKA-10505
 URL: https://issues.apache.org/jira/browse/KAFKA-10505
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov


Two system tests fail for the same reason:

streams_static_membership_test.py:

{noformat}
[INFO:2020-09-21 00:33:45,803]: RunnerClient: Loading test {'directory': 
'/opt/kafka-dev/tests/kafkatest/tests/streams', 'file_name': 
'streams_static_membership_test.py', 'cls_name': 'StreamsStaticMembershipTest', 
'method_name': 
'test_rolling_bounces_will_not_trigger_rebalance_under_static_membership', 
'injected_args': None}
[INFO:2020-09-21 00:33:45,818]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 Setting up...
[INFO:2020-09-21 00:33:45,819]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 Running...
[INFO:2020-09-21 00:35:24,906]: RunnerClient: 
kafkatest.tests.streams.streams_static_membership_test.StreamsStaticMembershipTest.test_rolling_bounces_will_not_trigger_rebalance_under_static_membership:
 FAIL: invalid literal for int() with base 10: 
"Generation{generationId=5,memberId='consumer-A-3-a9a925b2-2875-4756-8649-49f516b6cae1',protocol='stream'}\n"
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
134, in run
data = self.run_test()
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
192, in run_test
return self.test_context.function(self.test)
  File 
"/opt/kafka-dev/tests/kafkatest/tests/streams/streams_static_membership_test.py",
 line 88, in 
test_rolling_bounces_will_not_trigger_rebalance_under_static_membership
generation = int(generation)
ValueError: invalid literal for int() with base 10: 
"Generation{generationId=5,memberId='consumer-A-3-a9a925b2-2875-4756-8649-49f516b6cae1',protocol='stream'}\n"
{noformat}

streams_upgrade_test.py::StreamsUpgradeTest.test_version_probing_upgrade

{noformat}
test_id:
kafkatest.tests.streams.streams_upgrade_test.StreamsUpgradeTest.test_version_probing_upgrade
status: FAIL
run time:   1 minute 3.362 seconds


invalid literal for int() with base 10: 
"Generation{generationId=6,memberId='StreamsUpgradeTest-8a6ac110-1c65-40eb-af05-8bee270f1701-StreamThread-1-consumer-207de872-6588-407a-8485-101a19ba2bf0',protocol='stream'}\n"
Traceback (most recent call last):
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
134, in run
data = self.run_test()
  File 
"/usr/local/lib/python3.7/dist-packages/ducktape/tests/runner_client.py", line 
192, in run_test
return self.test_context.function(self.test)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 273, in test_version_probing_upgrade
current_generation = self.do_rolling_bounce(p, counter, current_generation)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 511, in do_rolling_bounce
processor_generation = self.extract_highest_generation(processor_found)
  File "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_upgrade_test.py", 
line 533, in extract_highest_generation
return int(found_generations[-1])
ValueError: invalid literal for int() with base 10: 
"Generation{generationId=6,memberId='StreamsUpgradeTest-8a6ac110-1c65-40eb-af05-8bee270f1701-StreamThread-1-consumer-207de872-6588-407a-8485-101a19ba2bf0',protocol='stream'}\n"
{noformat}


To fix it, we need to extract the generationId number from the log string.
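A minimal sketch of such an extraction (the helper name is hypothetical; the real fix lives in the system-test code, e.g. streams_upgrade_test.py):

```python
import re

# Instead of calling int() on the whole matched line, pull just the
# generationId digits out of strings like
# "Generation{generationId=5,memberId='...',protocol='stream'}"
GENERATION_RE = re.compile(r"generationId=(\d+)")


def extract_generation_id(line):
    match = GENERATION_RE.search(line)
    if match is None:
        raise ValueError("no generationId found in: %r" % line)
    return int(match.group(1))


line = ("Generation{generationId=5,"
        "memberId='consumer-A-3-a9a925b2-2875-4756-8649-49f516b6cae1',"
        "protocol='stream'}\n")
print(extract_generation_id(line))  # -> 5
```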



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-16 Thread Nikolay Izhikov
Hello, Guozhang.

> I can help run the test suite once your PR is cleanly rebased to verify the 
> whole suite works

Thank you for joining the review.

1. The PR has been rebased on the current trunk.

2. I triggered all tests in my private environment to verify them after the rebase.
I will inform you once the tests pass in my environment.

3. We need a new ducktape release [1] to be able to merge the PR [2].
For now, the PR is based on the ducktape trunk branch [3], not a specific release.
If the ducktape team needs any help with the release, please let me know.

[1] https://github.com/confluentinc/ducktape/issues/245
[2] https://github.com/apache/kafka/pull/9196
[3] 
https://github.com/apache/kafka/pull/9196/files#diff-9235a7bdb1ca9268681c0e56f3f3609bR39

> On 16 Sep 2020, at 07:32, Guozhang Wang wrote:
> 
> Hello Nikolay,
> 
> I can help run the test suite once your PR is cleanly rebased to verify the
> whole suite works and then I can merge (I'm trusting Ivan and Magnus here
> for their reviews :)
> 
> Guozhang
> 
> On Mon, Sep 14, 2020 at 3:56 AM Nikolay Izhikov  wrote:
> 
>> Hello!
>> 
>> I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
>> Committers, please, join the review.
>> 
>>> 3 сент. 2020 г., в 11:06, Nikolay Izhikov 
>> написал(а):
>>> 
>>> Hello!
>>> 
>>> Just a friendly reminder.
>>> 
>>> Patch to resolve some kind of technical debt - python2 in system tests
>> is ready!
>>> Can someone, please, take a look?
>>> 
>>> https://github.com/apache/kafka/pull/9196
>>> 
>>>> 28 авг. 2020 г., в 11:19, Nikolay Izhikov 
>> написал(а):
>>>> 
>>>> Hello!
>>>> 
>>>> Any feedback on this?
>>>> What I should additionally do to prepare system tests migration?
>>>> 
>>>>> 24 авг. 2020 г., в 11:17, Nikolay Izhikov 
>> написал(а):
>>>>> 
>>>>> Hello.
>>>>> 
>>>>> PR [1] is ready.
>>>>> Please, review.
>>>>> 
>>>>> But, I need help with the two following questions:
>>>>> 
>>>>> 1. We need a new release of ducktape which includes fixes [2], [3] for
>> python3.
>>>>> I created the issue in ducktape repo [4].
>>>>> Can someone help me with the release?
>>>>> 
>>>>> 2. I know that some companies run system tests for the trunk on a
>> regular bases.
>>>>> Can someone show me some results of these runs?
>>>>> So, I can compare failures in my PR and in the trunk.
>>>>> 
>>>>> Results [5] of run all for my PR available in the ticket [6]
>>>>> 
>>>>> ```
>>>>> SESSION REPORT (ALL TESTS)
>>>>> ducktape version: 0.8.0
>>>>> session_id:   2020-08-23--002
>>>>> run time: 1010 minutes 46.483 seconds
>>>>> tests run:684
>>>>> passed:   505
>>>>> failed:   9
>>>>> ignored:  170
>>>>> ```
>>>>> 
>>>>> [1] https://github.com/apache/kafka/pull/9196
>>>>> [2]
>> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>>>>> [3]
>> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>>>>> [4] https://github.com/confluentinc/ducktape/issues/245
>>>>> [5]
>> https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>>>>> [6] https://issues.apache.org/jira/browse/KAFKA-10402
>>>>> 
>>>>>> 14 авг. 2020 г., в 21:26, Ismael Juma  написал(а):
>>>>>> 
>>>>>> +1
>>>>>> 
>>>>>> On Fri, Aug 14, 2020 at 7:42 AM John Roesler 
>> wrote:
>>>>>> 
>>>>>>> Thanks Nikolay,
>>>>>>> 
>>>>>>> No objection. This would be very nice to have.
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> John
>>>>>>> 
>>>>>>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>>>>>>> Hello.
>>>>>>>> 
>>>>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>>>>>> change.
>>>>>>>> 
>>>>>>>> I’ve created a ticket [1] to upgrade system tests to python3.

Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-14 Thread Nikolay Izhikov
Hello!

I got 2 approvals from Ivan Daschinskiy and Magnus Edenhill.
Committers, please join the review.

> On 3 Sep 2020, at 11:06, Nikolay Izhikov wrote:
> 
> Hello! 
> 
> Just a friendly reminder.
> 
> Patch to resolve some kind of technical debt - python2 in system tests is 
> ready!
> Can someone, please, take a look?
> 
> https://github.com/apache/kafka/pull/9196
> 
>> 28 авг. 2020 г., в 11:19, Nikolay Izhikov  
>> написал(а):
>> 
>> Hello!
>> 
>> Any feedback on this?
>> What I should additionally do to prepare system tests migration?
>> 
>>> 24 авг. 2020 г., в 11:17, Nikolay Izhikov  
>>> написал(а):
>>> 
>>> Hello.
>>> 
>>> PR [1] is ready.
>>> Please, review.
>>> 
>>> But, I need help with the two following questions:
>>> 
>>> 1. We need a new release of ducktape which includes fixes [2], [3] for 
>>> python3.
>>> I created the issue in ducktape repo [4].
>>> Can someone help me with the release?
>>> 
>>> 2. I know that some companies run system tests for the trunk on a regular 
>>> bases.
>>> Can someone show me some results of these runs?
>>> So, I can compare failures in my PR and in the trunk.
>>> 
>>> Results [5] of run all for my PR available in the ticket [6]
>>> 
>>> ```
>>> SESSION REPORT (ALL TESTS)
>>> ducktape version: 0.8.0
>>> session_id:   2020-08-23--002
>>> run time: 1010 minutes 46.483 seconds
>>> tests run:684
>>> passed:   505
>>> failed:   9
>>> ignored:  170
>>> ```
>>> 
>>> [1] https://github.com/apache/kafka/pull/9196
>>> [2] 
>>> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>>> [3] 
>>> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>>> [4] https://github.com/confluentinc/ducktape/issues/245
>>> [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>>> [6] https://issues.apache.org/jira/browse/KAFKA-10402
>>> 
>>>> 14 авг. 2020 г., в 21:26, Ismael Juma  написал(а):
>>>> 
>>>> +1
>>>> 
>>>> On Fri, Aug 14, 2020 at 7:42 AM John Roesler  wrote:
>>>> 
>>>>> Thanks Nikolay,
>>>>> 
>>>>> No objection. This would be very nice to have.
>>>>> 
>>>>> Thanks,
>>>>> John
>>>>> 
>>>>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>>>>> Hello.
>>>>>> 
>>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>>>> change.
>>>>>> 
>>>>>> I’ve created a ticket [1] to upgrade system tests to python3.
>>>>>> Does someone have any additional inputs or objections for this change?
>>>>>> 
>>>>>> [1] https://issues.apache.org/jira/browse/KAFKA-10402
>>>>>> 
>>>>>> 
>>>>>>> 1 июля 2020 г., в 00:26, Gokul Ramanan Subramanian <
>>>>> gokul24...@gmail.com> написал(а):
>>>>>>> 
>>>>>>> Thanks Colin.
>>>>>>> 
>>>>>>> While at the subject of system tests, there are a few times I see tests
>>>>>>> timed out (even on a large machine such as m5.4xlarge EC2 with Linux).
>>>>> Are
>>>>>>> there any knobs that system tests provide to control timeouts /
>>>>> throughputs
>>>>>>> across all tests?
>>>>>>> Thanks.
>>>>>>> 
>>>>>>> On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe 
>>>>> wrote:
>>>>>>> 
>>>>>>>> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
>>>>>>>> trying to do here.
>>>>>>>> 
>>>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>>>> change.
>>>>>>>> 
>>>>>>>> Otherwise, using docker as suggested here seems to be the best way to
>>>>> go.
>>>>>>>> 
>>>>>>>> best,
>>>>>>>> Colin
>>>>>>>> 
>>>>>>>> On Mon, Jun 29, 2020, at 02:14,

Re: [DISCUSS] KIP-567: Kafka Cluster Audit

2020-09-07 Thread Nikolay Izhikov
Hello, Viktor.

Do you want to implement the exact approach as described in the current KIP?
Or do you have another proposal for how it should be implemented?

I abandoned this KIP due to a lack of interest from the community.
I guess we can collaborate during the implementation.

> On 7 Sep 2020, at 13:13, Viktor Somogyi-Vass wrote:
> 
> Hi folks,
> 
> It's been a few days since I last pinged and nobody replied so I assume
> that this KIP is abandoned and I can take this over (but please let me know
> if it's not). I will keep the current version of the KIP and move it to a
> sub-page if it's ever needed.
> 
> Thanks,
> Viktor
> 
> On Fri, Aug 28, 2020 at 11:35 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com> wrote:
> 
>> Hi folks,
>> 
>> I have a use-case and a non-trivial implementation with Apache Atlas for
>> this KIP and since this kip seems to be dormant for a while now, I'd take
>> it over and drive it to completion if you don't mind.
>> The current state of the PoC can be found on my fork at
>> https://github.com/viktorsomogyi/kafka/tree/atlas-audit-impl.
>> 
>> Viktor
>> 
>> On Wed, Jan 29, 2020 at 9:32 AM Игорь Мартемьянов 
>> wrote:
>> 
>>> Hello, Nikolai.
>>> 
>>> 
>>> 
 Can you, please, make it more specific?
>>> 
 Why does a business want to have this information?
>>> 
>>> It is very demanded for security department to know who/when/where create
>>> or edit ACL settings. The same situation about topics.
>>> 
>>> 
>>> 
 What are the use-cases for it?
>>> 
>>> This KIP are able to help catching some unauthorized changes of cluster
>>> resources or save history these changes.
>>> 
>>> 
>>> 
 Who will be analyzing these events and how?
>>> 
>>> Events could be analyzed by some security department or developers who
>>> builds applications that can change cluster resources.
>>> 
>>> 
>>> 
 Why it’s not convenient to implement it with third-party tools?
>>> 
>>> It is imposible to my mind getting detailed information about these events
>>> through third-party tools, because we don`t have the ability to catch
>>> these
>>> events outside Kafka core.
>>> 
>>> 
>>> 
 1. `audit` name sounds too general for me. How about `onEvent`?
>>> 
>>> Thanks, good point. I`ll change the method name.
>>> 
>>> 
>>> 
 2. Should we introduce a special marker interface for audit events?
>>> 
 `AuditableRequest`, for example?
>>> 
>>> No, we shouldn`t. We are going to create these events if we process
>>> special
>>> request in KafkaApis class (ApiKeys.CREATE_TOPICS, ApiKeys.DELETE_TOPICS,
>>> ApiKeys.CREATE_ACLS, ApiKeys.DELETE_ACLS, ApiKeys.DESCRIBE_ACLS,
>>> ApiKeys.ALTER_CONFIGS)
>>> 
>>> 
>>> 
> public interface AuditEvent {
>>> 
> String guid();
>>> 
>>> 
>>> 
 Where this `guid` comes from?
>>> 
>>> Guid generated automatically when the audit event is created.
>>> 
>>> 
>>> 
 Will it be the same on each node that receives an auditable event?
>>> 
>>> Event is creating while request that changes cluster resource is processed
>>> and will destroyed after the prosessing will finished on each node.
>>> 
>>> 
>>> 
 Do we have `guid` for any extensions of `AbstractRequest`?
>>> 
>>> No, we don`t.
>>> 
>>> 
>>> 
 If this field is `guid` why do we format this as a String on the API
>>> level?
>>> 
>>> Thanks for the point.
>>> 
>>> I have changed event interface recently.
>>> 
>>> 
>>> 
 Can you, please, add a full list of proposed new configuration
>>> properties
>>> and
>>> 
 examples for each to clarify your intentions?
>>> 
>>> Yes, I have added some examples of new configuration properties.
>>> 
>>> ср, 29 янв. 2020 г., 10:23 Владимир Беруненко :
>>> 
 Hi Nikolai!
 
> Can you, please, make it more specific?
 Why does a business want to have this information?
> What are the use-cases for it?
> Who will be analyzing these events and how?
> Why it’s not convenient to implement it with third-party tools?
 
 This is required by the guys from information security to detect
>>> potential
 threats of violation the rules for providing access to the data layer.
 Analysis in the audit system is usually based on identifying
 uncharacteristic integrations. The key feature of the audit, is that
 cluster administrators do not have access to modification of audit
>>> events.
 Third-party tools are great for data analysis, but its not good idea to
>>> use
 it for audit events collection.
 
> It’s not clear for me where and when AuditEvents will be sent?
 Who will be the receiver of events?
 
 In my opinion, sending an audit event should be initiated when the
>>> broker
 receives a request that matches the audit parameters. Each organization
>>> has
 its own receiver system, so a common interface is required that the
 organization’s development team can implement to integrate with their
>>> audit
 system.
 
 Best wishes, Vladimir
 
 Hello, Igor.
> 

Re: [DISCUSSION] Upgrade system tests to python 3

2020-09-03 Thread Nikolay Izhikov
Hello! 

Just a friendly reminder.

A patch to resolve some long-standing technical debt (python 2 in the system tests) is ready!
Can someone please take a look?

https://github.com/apache/kafka/pull/9196

> On 28 Aug 2020, at 11:19, Nikolay Izhikov wrote:
> 
> Hello!
> 
> Any feedback on this?
> What I should additionally do to prepare system tests migration?
> 
>> 24 авг. 2020 г., в 11:17, Nikolay Izhikov  
>> написал(а):
>> 
>> Hello.
>> 
>> PR [1] is ready.
>> Please, review.
>> 
>> But, I need help with the two following questions:
>> 
>> 1. We need a new release of ducktape which includes fixes [2], [3] for 
>> python3.
>> I created the issue in ducktape repo [4].
>> Can someone help me with the release?
>> 
>> 2. I know that some companies run system tests for the trunk on a regular 
>> bases.
>> Can someone show me some results of these runs?
>> So, I can compare failures in my PR and in the trunk.
>> 
>> Results [5] of run all for my PR available in the ticket [6]
>> 
>> ```
>> SESSION REPORT (ALL TESTS)
>> ducktape version: 0.8.0
>> session_id:   2020-08-23--002
>> run time: 1010 minutes 46.483 seconds
>> tests run:684
>> passed:   505
>> failed:   9
>> ignored:  170
>> ```
>> 
>> [1] https://github.com/apache/kafka/pull/9196
>> [2] 
>> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
>> [3] 
>> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
>> [4] https://github.com/confluentinc/ducktape/issues/245
>> [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
>> [6] https://issues.apache.org/jira/browse/KAFKA-10402
>> 
>>> 14 авг. 2020 г., в 21:26, Ismael Juma  написал(а):
>>> 
>>> +1
>>> 
>>> On Fri, Aug 14, 2020 at 7:42 AM John Roesler  wrote:
>>> 
>>>> Thanks Nikolay,
>>>> 
>>>> No objection. This would be very nice to have.
>>>> 
>>>> Thanks,
>>>> John
>>>> 
>>>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>>>> Hello.
>>>>> 
>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>>> change.
>>>>> 
>>>>> I’ve created a ticket [1] to upgrade system tests to python3.
>>>>> Does someone have any additional inputs or objections for this change?
>>>>> 
>>>>> [1] https://issues.apache.org/jira/browse/KAFKA-10402
>>>>> 
>>>>> 
>>>>>> 1 июля 2020 г., в 00:26, Gokul Ramanan Subramanian <
>>>> gokul24...@gmail.com> написал(а):
>>>>>> 
>>>>>> Thanks Colin.
>>>>>> 
>>>>>> While at the subject of system tests, there are a few times I see tests
>>>>>> timed out (even on a large machine such as m5.4xlarge EC2 with Linux).
>>>> Are
>>>>>> there any knobs that system tests provide to control timeouts /
>>>> throughputs
>>>>>> across all tests?
>>>>>> Thanks.
>>>>>> 
>>>>>> On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe 
>>>> wrote:
>>>>>> 
>>>>>>> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
>>>>>>> trying to do here.
>>>>>>> 
>>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>>> change.
>>>>>>> 
>>>>>>> Otherwise, using docker as suggested here seems to be the best way to
>>>> go.
>>>>>>> 
>>>>>>> best,
>>>>>>> Colin
>>>>>>> 
>>>>>>> On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
>>>>>>>> Hi.
>>>>>>>> 
>>>>>>>> Has anyone had luck running Kafka system tests on a Mac. I have a
>>>> MacOS
>>>>>>>> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
>>>>>>>> *ducktape tests/kafkatest/tests* yields the following error, making
>>>> it
>>>>>>> look
>>>>>>>> like some Python incompatibility issue.
>>>>>>>> 
>>>>>>>> $ ducktape tests/kafk

Re: [DISCUSSION] Upgrade system tests to python 3

2020-08-28 Thread Nikolay Izhikov
Hello!

Any feedback on this?
What else should I do to prepare for the system tests migration?

> On 24 Aug 2020, at 11:17, Nikolay Izhikov wrote:
> 
> Hello.
> 
> PR [1] is ready.
> Please, review.
> 
> But, I need help with the two following questions:
> 
> 1. We need a new release of ducktape which includes fixes [2], [3] for 
> python3.
> I created the issue in ducktape repo [4].
> Can someone help me with the release?
> 
> 2. I know that some companies run system tests for the trunk on a regular 
> bases.
> Can someone show me some results of these runs?
> So, I can compare failures in my PR and in the trunk.
> 
> Results [5] of run all for my PR available in the ticket [6]
> 
> ```
> SESSION REPORT (ALL TESTS)
> ducktape version: 0.8.0
> session_id:   2020-08-23--002
> run time: 1010 minutes 46.483 seconds
> tests run:684
> passed:   505
> failed:   9
> ignored:  170
> ```
> 
> [1] https://github.com/apache/kafka/pull/9196
> [2] 
> https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
> [3] 
> https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
> [4] https://github.com/confluentinc/ducktape/issues/245
> [5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
> [6] https://issues.apache.org/jira/browse/KAFKA-10402
> 
>> 14 авг. 2020 г., в 21:26, Ismael Juma  написал(а):
>> 
>> +1
>> 
>> On Fri, Aug 14, 2020 at 7:42 AM John Roesler  wrote:
>> 
>>> Thanks Nikolay,
>>> 
>>> No objection. This would be very nice to have.
>>> 
>>> Thanks,
>>> John
>>> 
>>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>>> Hello.
>>>> 
>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>> change.
>>>> 
>>>> I’ve created a ticket [1] to upgrade system tests to python3.
>>>> Does someone have any additional inputs or objections for this change?
>>>> 
>>>> [1] https://issues.apache.org/jira/browse/KAFKA-10402
>>>> 
>>>> 
>>>>> 1 июля 2020 г., в 00:26, Gokul Ramanan Subramanian <
>>> gokul24...@gmail.com> написал(а):
>>>>> 
>>>>> Thanks Colin.
>>>>> 
>>>>> While at the subject of system tests, there are a few times I see tests
>>>>> timed out (even on a large machine such as m5.4xlarge EC2 with Linux).
>>> Are
>>>>> there any knobs that system tests provide to control timeouts /
>>> throughputs
>>>>> across all tests?
>>>>> Thanks.
>>>>> 
>>>>> On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe 
>>> wrote:
>>>>> 
>>>>>> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
>>>>>> trying to do here.
>>>>>> 
>>>>>> If anyone's interested in porting it to Python 3 it would be a good
>>> change.
>>>>>> 
>>>>>> Otherwise, using docker as suggested here seems to be the best way to
>>> go.
>>>>>> 
>>>>>> best,
>>>>>> Colin
>>>>>> 
>>>>>> On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
>>>>>>> Hi.
>>>>>>> 
>>>>>>> Has anyone had luck running Kafka system tests on a Mac. I have a
>>> MacOS
>>>>>>> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
>>>>>>> *ducktape tests/kafkatest/tests* yields the following error, making
>>> it
>>>>>> look
>>>>>>> like some Python incompatibility issue.
>>>>>>> 
>>>>>>> $ ducktape tests/kafkatest/tests
>>>>>>> Traceback (most recent call last):
>>>>>>> File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11,
>>> in
>>>>>>> 
>>>>>>>  load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>>>>>>> File
>>>>>>> 
>>>>>> 
>>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>>>>> line 487, in load_entry_point
>>>>>>>  return get_distribution(dist).load_entry_point(group, name)
>>>>>>> File
>>

Re: [DISCUSSION] Upgrade system tests to python 3

2020-08-24 Thread Nikolay Izhikov
Hello.

PR [1] is ready.
Please, review.

But I need help with the following two questions:

1. We need a new release of ducktape which includes fixes [2], [3] for python3.
I created the issue in ducktape repo [4].
Can someone help me with the release?

2. I know that some companies run system tests for the trunk on a regular basis.
Can someone show me some results of these runs?
That way, I can compare failures in my PR with those in the trunk.

Results [5] of running all tests for my PR are available in the ticket [6]

```
SESSION REPORT (ALL TESTS)
ducktape version: 0.8.0
session_id:   2020-08-23--002
run time: 1010 minutes 46.483 seconds
tests run:684
passed:   505
failed:   9
ignored:  170
```

[1] https://github.com/apache/kafka/pull/9196
[2] 
https://github.com/confluentinc/ducktape/commit/23bd5ab53802e3a1e1da1ddf3630934f33b02305
[3] 
https://github.com/confluentinc/ducktape/commit/bfe53712f83b025832d29a43cde3de3d7803106f
[4] https://github.com/confluentinc/ducktape/issues/245
[5] https://issues.apache.org/jira/secure/attachment/13010366/report.txt
[6] https://issues.apache.org/jira/browse/KAFKA-10402

> On 14 Aug 2020, at 21:26, Ismael Juma wrote:
> 
> +1
> 
> On Fri, Aug 14, 2020 at 7:42 AM John Roesler  wrote:
> 
>> Thanks Nikolay,
>> 
>> No objection. This would be very nice to have.
>> 
>> Thanks,
>> John
>> 
>> On Fri, Aug 14, 2020, at 09:18, Nikolay Izhikov wrote:
>>> Hello.
>>> 
>>>> If anyone's interested in porting it to Python 3 it would be a good
>> change.
>>> 
>>> I’ve created a ticket [1] to upgrade system tests to python3.
>>> Does someone have any additional inputs or objections for this change?
>>> 
>>> [1] https://issues.apache.org/jira/browse/KAFKA-10402
>>> 
>>> 
>>>> 1 июля 2020 г., в 00:26, Gokul Ramanan Subramanian <
>> gokul24...@gmail.com> написал(а):
>>>> 
>>>> Thanks Colin.
>>>> 
>>>> While at the subject of system tests, there are a few times I see tests
>>>> timed out (even on a large machine such as m5.4xlarge EC2 with Linux).
>> Are
>>>> there any knobs that system tests provide to control timeouts /
>> throughputs
>>>> across all tests?
>>>> Thanks.
>>>> 
>>>> On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe 
>> wrote:
>>>> 
>>>>> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
>>>>> trying to do here.
>>>>> 
>>>>> If anyone's interested in porting it to Python 3 it would be a good
>> change.
>>>>> 
>>>>> Otherwise, using docker as suggested here seems to be the best way to
>> go.
>>>>> 
>>>>> best,
>>>>> Colin
>>>>> 
>>>>> On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
>>>>>> Hi.
>>>>>> 
>>>>>> Has anyone had luck running Kafka system tests on a Mac. I have a
>> MacOS
>>>>>> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
>>>>>> *ducktape tests/kafkatest/tests* yields the following error, making
>> it
>>>>> look
>>>>>> like some Python incompatibility issue.
>>>>>> 
>>>>>> $ ducktape tests/kafkatest/tests
>>>>>> Traceback (most recent call last):
>>>>>> File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11,
>> in
>>>>>> 
>>>>>>   load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>>>>>> File
>>>>>> 
>>>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>>>> line 487, in load_entry_point
>>>>>>   return get_distribution(dist).load_entry_point(group, name)
>>>>>> File
>>>>>> 
>>>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>>>> line 2728, in load_entry_point
>>>>>>   return ep.load()
>>>>>> File
>>>>>> 
>>>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>>>> line 2346, in load
>>>>>>   return self.resolve()
>>>>>> File
>>>>>> 
>>>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>>>> line 2352, in resolve
>>>>>>   module = __import__(self.module_name, fromlist=['__name__'],
>>>>>> level=0)
>>>>>> File
>>>>>> 
>>>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/ducktape-0.7.6-py3.6.egg/ducktape/command_line/main.py",
>>>>>> line 127
>>>>>>   print "parameters are not valid json: " + str(e.message)
>>>>>> ^
>>>>>> SyntaxError: invalid syntax
>>>>>> 
>>>>>> I followed the instructions in tests/README.md to setup a cluster of
>> 9
>>>>>> worker machines. That worked well. When I ran *python setup.py
>> develop*
>>>>> to
>>>>>> install the necessary dependencies (including ducktape), I got
>> similar
>>>>>> errors to above, but the overall command completed successfully.
>>>>>> 
>>>>>> Any help appreciated.
>>>>>> 
>>>>>> Thanks.
>>>>>> 
>>>>> 
>>> 
>>> 
>> 



[DISCUSSION] Upgrade system tests to python 3

2020-08-14 Thread Nikolay Izhikov
Hello.

> If anyone's interested in porting it to Python 3 it would be a good change.

I’ve created a ticket [1] to upgrade system tests to python3.
Does anyone have additional input or objections to this change?

[1] https://issues.apache.org/jira/browse/KAFKA-10402


> On 1 Jul 2020, at 00:26, Gokul Ramanan Subramanian wrote:
> 
> Thanks Colin.
> 
> While at the subject of system tests, there are a few times I see tests
> timed out (even on a large machine such as m5.4xlarge EC2 with Linux). Are
> there any knobs that system tests provide to control timeouts / throughputs
> across all tests?
> Thanks.
> 
> On Tue, Jun 30, 2020 at 6:32 PM Colin McCabe  wrote:
> 
>> Ducktape runs on Python 2.  You can't use it with Python 3, as you are
>> trying to do here.
>> 
>> If anyone's interested in porting it to Python 3 it would be a good change.
>> 
>> Otherwise, using docker as suggested here seems to be the best way to go.
>> 
>> best,
>> Colin
>> 
>> On Mon, Jun 29, 2020, at 02:14, Gokul Ramanan Subramanian wrote:
>>> Hi.
>>> 
>>> Has anyone had luck running Kafka system tests on a Mac. I have a MacOS
>>> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
>>> *ducktape tests/kafkatest/tests* yields the following error, making it
>> look
>>> like some Python incompatibility issue.
>>> 
>>> $ ducktape tests/kafkatest/tests
>>> Traceback (most recent call last):
>>>  File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11, in
>>> 
>>>load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>>>  File
>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>> line 487, in load_entry_point
>>>return get_distribution(dist).load_entry_point(group, name)
>>>  File
>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>> line 2728, in load_entry_point
>>>return ep.load()
>>>  File
>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>> line 2346, in load
>>>return self.resolve()
>>>  File
>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>> line 2352, in resolve
>>>module = __import__(self.module_name, fromlist=['__name__'],
>>> level=0)
>>>  File
>>> 
>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/ducktape-0.7.6-py3.6.egg/ducktape/command_line/main.py",
>>> line 127
>>>print "parameters are not valid json: " + str(e.message)
>>>  ^
>>> SyntaxError: invalid syntax
>>> 
>>> I followed the instructions in tests/README.md to setup a cluster of 9
>>> worker machines. That worked well. When I ran *python setup.py develop*
>> to
>>> install the necessary dependencies (including ducktape), I got similar
>>> errors to above, but the overall command completed successfully.
>>> 
>>> Any help appreciated.
>>> 
>>> Thanks.
>>> 
>> 



[jira] [Created] (KAFKA-10402) Upgrade python version in system tests

2020-08-14 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-10402:
---

 Summary: Upgrade python version in system tests
 Key: KAFKA-10402
 URL: https://issues.apache.org/jira/browse/KAFKA-10402
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


Currently, the system tests use Python 2, which is outdated and no longer supported.

Since all dependencies of the system tests, including ducktape, now support Python 3, we
can migrate the system tests to Python 3.
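The changes involved are mostly mechanical. A few illustrative Python 2 -> 3 examples (not taken from the actual patch; the syntax error quoted elsewhere in this thread is of the first kind):

```python
# Python 2: print "parameters are not valid json: " + str(e.message)
# Python 3: print() is a function, and exceptions no longer carry .message
def report_bad_json(exc):
    print("parameters are not valid json: " + str(exc))

# Python 2: dict.iteritems() -> Python 3: dict.items()
counts = {"passed": 505, "failed": 9}
for name, value in counts.items():
    print("%s=%d" % (name, value))

# Python 2: "/" on ints truncates -> Python 3 needs "//" for an int result
run_time_minutes = 1010
hours = run_time_minutes // 60  # 16
```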





Re: Running system tests on mac

2020-06-29 Thread Nikolay Izhikov
Hello.

> Is this common? At this rate, the tests would take about 2 days to complete 
> and there'd probably be lots of failures.

Yes.
System tests are heavy and resource-intensive.
You can run a single test with the TC_PATHS environment variable.

```
TC_PATHS="tests/kafkatest/tests/client/pluggable_test.py" bash tests/docker/run_tests.sh
```

As you may know, there is no infrastructure for running the system tests on public servers.
I usually use an external machine from a cloud provider to check system tests for my PRs.


> On 29 Jun 2020, at 17:57, Gokul Ramanan Subramanian wrote:
> 
> I have been running the tests for over 5 hours now, and only 60 or so out
> of the 600 tests are through. Many tests are timing out. For example:
> 
> [INFO:2020-06-29 07:52:54,606]: RunnerClient:
> kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test.StreamsCooperativeRebalanceUpgradeTest.test_upgrade_to_cooperative_rebalance.upgrade_from_version=2.3.1:
> FAIL: Never saw 'Processed [0-9]* records so far' message ducker@ducker07
> Traceback (most recent call last):
>  File
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py",
> line 132, in run
>data = self.run_test()
>  File
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py",
> line 189, in run_test
>return self.test_context.function(self.test)
>  File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py",
> line 428, in wrapper
>return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>  File
> "/opt/kafka-dev/tests/kafkatest/tests/streams/streams_cooperative_rebalance_upgrade_test.py",
> line 90, in test_upgrade_to_cooperative_rebalance
>verify_running(processor, self.processing_message)
>  File "/opt/kafka-dev/tests/kafkatest/tests/streams/utils/util.py", line
> 21, in verify_running
>err_msg="Never saw '%s' message " % message +
> str(processor.node.account))
>  File
> "/usr/local/lib/python2.7/dist-packages/ducktape/cluster/remoteaccount.py",
> line 705, in wait_until
>allow_fail=True) == 0, **kwargs)
>  File "/usr/local/lib/python2.7/dist-packages/ducktape/utils/util.py",
> line 41, in wait_until
>raise TimeoutError(err_msg() if callable(err_msg) else err_msg)
> TimeoutError: Never saw 'Processed [0-9]* records so far' message
> ducker@ducker07
> 
> My Macbook pro has over 4GB of RAM free and between 15-45 % idle CPU. So, I
> don't think I am running low on resources.
> 
> Is this common? At this rate, the tests would take about 2 days to complete
> and there'd probably be lots of failures.
> 
> On Mon, Jun 29, 2020 at 11:21 AM Gokul Ramanan Subramanian <
> gokul24...@gmail.com> wrote:
> 
>> Thanks. This worked. I had to use the Docker desktop app, not
>> docker-machine, with which it gave errors around permissions setting up
>> /etc/hosts.
>> 
>> On Mon, Jun 29, 2020 at 10:24 AM Nikolay Izhikov 
>> wrote:
>> 
>>> Hello,
>>> 
>>> I successfully run system tests on Mac with Docker.
>>> I followed the instruction on [1] and it works like a charm.
>>> 
>>> [1]
>>> https://github.com/apache/kafka/tree/trunk/tests#running-tests-using-docker
>>> 
>>> 
>>>> On 29 June 2020, at 12:14, Gokul Ramanan Subramanian <gokul24...@gmail.com> wrote:
>>>> 
>>>> Hi.
>>>> 
>>>> Has anyone had luck running Kafka system tests on a Mac. I have a MacOS
>>>> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
>>>> *ducktape tests/kafkatest/tests* yields the following error, making it
>>> look
>>>> like some Python incompatibility issue.
>>>> 
>>>> $ ducktape tests/kafkatest/tests
>>>> Traceback (most recent call last):
>>>> File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11, in
>>>> 
>>>>   load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>>>> File
>>>> 
>>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>> line 487, in load_entry_point
>>>>   return get_distribution(dist).load_entry_point(group, name)
>>>> File
>>>> 
>>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
>>>> line 2728, in load_entry_point
>>>>   return ep.load()
>>>> File
>>>> 
>>> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/pyt

Re: Running system tests on mac

2020-06-29 Thread Nikolay Izhikov
Hello, 

I successfully ran the system tests on a Mac with Docker.
I followed the instructions in [1] and it works like a charm.

[1] https://github.com/apache/kafka/tree/trunk/tests#running-tests-using-docker


> On 29 June 2020, at 12:14, Gokul Ramanan Subramanian wrote:
> 
> Hi.
> 
> Has anyone had luck running Kafka system tests on a Mac. I have a MacOS
> Mojave 10.14.6. I got Python 3.6.9 using pyenv. However, the command
> *ducktape tests/kafkatest/tests* yields the following error, making it look
> like some Python incompatibility issue.
> 
> $ ducktape tests/kafkatest/tests
> Traceback (most recent call last):
>  File "/Users/gokusubr/.pyenv/versions/3.6.9/bin/ducktape", line 11, in
> 
>load_entry_point('ducktape', 'console_scripts', 'ducktape')()
>  File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 487, in load_entry_point
>return get_distribution(dist).load_entry_point(group, name)
>  File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2728, in load_entry_point
>return ep.load()
>  File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2346, in load
>return self.resolve()
>  File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/pkg_resources/__init__.py",
> line 2352, in resolve
>module = __import__(self.module_name, fromlist=['__name__'], level=0)
>  File
> "/Users/gokusubr/.pyenv/versions/3.6.9/lib/python3.6/site-packages/ducktape-0.7.6-py3.6.egg/ducktape/command_line/main.py",
> line 127
>print "parameters are not valid json: " + str(e.message)
>  ^
> SyntaxError: invalid syntax
> 
> I followed the instructions in tests/README.md to setup a cluster of 9
> worker machines. That worked well. When I ran *python setup.py develop* to
> install the necessary dependencies (including ducktape), I got similar
> errors to above, but the overall command completed successfully.
> 
> Any help appreciated.
> 
> Thanks.
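The SyntaxError above is a Python 2/3 incompatibility rather than a test failure: the installed ducktape 0.7.6 egg contains a Python 2-style print statement that Python 3.6 cannot even parse. A minimal sketch of the failure mode (the quoted line is copied from the traceback, not from the full ducktape source):

```python
# ducktape 0.7.6 was written for Python 2: it contains a bare `print`
# statement, which Python 3 rejects with a SyntaxError at import time --
# exactly the failure shown in the traceback above.
py2_style_line = 'print "parameters are not valid json: " + str(e.message)'

try:
    compile(py2_style_line, "ducktape/command_line/main.py", "exec")
    raised = False
except SyntaxError:
    # Under Python 3, `print` is a function; the statement form is
    # rejected before any code runs.
    raised = True

print("SyntaxError under Python 3:", raised)
```

Running the tests with a Python 2 interpreter, or with a Python 3-compatible ducktape release, avoids the error; the Docker workflow from tests/README sidesteps the host Python entirely.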



[jira] [Resolved] (KAFKA-9320) Enable TLSv1.3 by default and disable some of the older protocols

2020-06-03 Thread Nikolay Izhikov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Izhikov resolved KAFKA-9320.

Resolution: Fixed

Fixed by 
https://github.com/apache/kafka/commit/8b22b8159673bfe22d8ac5dcd4e4312d4f2c863c

> Enable TLSv1.3 by default and disable some of the older protocols
> -
>
> Key: KAFKA-9320
> URL: https://issues.apache.org/jira/browse/KAFKA-9320
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>    Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: needs-kip
> Attachments: report.txt
>
>
> KAFKA-7251 added support for TLSv1.3. We should include this in the list of 
> protocols that are enabled by default. We should also disable some of the 
> older protocols that are not secure. This change requires a KIP.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10050) kafka_log4j_appender.py broken on JDK11

2020-05-27 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-10050:
---

 Summary: kafka_log4j_appender.py broken on JDK11
 Key: KAFKA-10050
 URL: https://issues.apache.org/jira/browse/KAFKA-10050
 Project: Kafka
  Issue Type: Bug
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


kafka_log4j_appender.py is broken on JDK11

{noformat}
[INFO:2020-05-27 02:31:27,662]: RunnerClient: 
kafkatest.tests.tools.log4j_appender_test.Log4jAppenderTest.test_log4j_appender.security_protocol=SSL:
 Data: None

SESSION REPORT (ALL TESTS)
ducktape version: 0.7.7
session_id:   2020-05-27--002
run time: 1 minute 41.177 seconds
tests run:4
passed:   0
failed:   4
ignored:  0

test_id:
kafkatest.tests.tools.log4j_appender_test.Log4jAppenderTest.test_log4j_appender.security_protocol=SASL_PLAINTEXT
status: FAIL
run time:   27.509 seconds


KafkaLog4jAppender-0-140270269628496-worker-1: Traceback (most recent call 
last):
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
 line 36, in _protected_worker
self._worker(idx, node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
42, in _worker
cmd = self.start_cmd(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
48, in start_cmd
cmd = fix_opts_for_new_jvm(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/util.py", line 36, in 
fix_opts_for_new_jvm
if node.version == LATEST_0_8_2 or node.version == LATEST_0_9 or 
node.version == LATEST_0_10_0 or node.version == LATEST_0_10_1 or node.version 
== LATEST_0_10_2 or node.version == LATEST_0_11_0 or node.version == LATEST_1_0:
AttributeError: 'ClusterNode' object has no attribute 'version'

Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 
132, in run
data = self.run_test()
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 
189, in run_test
return self.test_context.function(self.test)
  File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 
428, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/tools/log4j_appender_test.py", 
line 84, in test_log4j_appender
self.appender.wait()
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
 line 72, in wait
self._propagate_exceptions()
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
 line 98, in _propagate_exceptions
raise Exception(self.errors)
Exception: KafkaLog4jAppender-0-140270269628496-worker-1: Traceback (most 
recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
 line 36, in _protected_worker
self._worker(idx, node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
42, in _worker
cmd = self.start_cmd(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
48, in start_cmd
cmd = fix_opts_for_new_jvm(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/util.py", line 36, in 
fix_opts_for_new_jvm
if node.version == LATEST_0_8_2 or node.version == LATEST_0_9 or 
node.version == LATEST_0_10_0 or node.version == LATEST_0_10_1 or node.version 
== LATEST_0_10_2 or node.version == LATEST_0_11_0 or node.version == LATEST_1_0:
AttributeError: 'ClusterNode' object has no attribute 'version'



test_id:
kafkatest.tests.tools.log4j_appender_test.Log4jAppenderTest.test_log4j_appender.security_protocol=SASL_SSL
status: FAIL
run time:   28.121 seconds


KafkaLog4jAppender-0-140270269498000-worker-1: Traceback (most recent call 
last):
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/services/background_thread.py",
 line 36, in _protected_worker
self._worker(idx, node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
42, in _worker
cmd = self.start_cmd(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka_log4j_appender.py", line 
48, in start_cmd
cmd = fix_opts_for_new_jvm(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/util.py", line 36, in 
fix_opts_for_new_jvm
if node.version == LATEST_0_8_2 or node.version == LATEST_0_9 or 
node.version == LATEST_0_10_0 or node.version == LATEST_0_10_1 or node.ve
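A defensive sketch of the failing check (hypothetical; the actual fix in kafkatest may differ): treat a node that carries no `version` attribute as a dev/trunk node instead of raising AttributeError. `OLD_VERSIONS` and `OLD_JVM_OPTS` stand in for the LATEST_0_8_2 ... LATEST_1_0 constants and the real option rewriting.

```python
# Hypothetical guard for kafkatest.services.kafka.util.fix_opts_for_new_jvm:
# the traceback shows a ClusterNode with no `version` attribute at all.
OLD_VERSIONS = {"0.8.2", "0.9", "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0"}
OLD_JVM_OPTS = ""  # placeholder for the version-specific JVM options


def fix_opts_for_new_jvm(node):
    # getattr with a default avoids the AttributeError for version-less nodes.
    version = getattr(node, "version", None)
    if version is not None and str(version) in OLD_VERSIONS:
        return OLD_JVM_OPTS
    return ""  # dev/trunk nodes need no JVM-option rewriting


class ClusterNode:
    """Minimal stand-in for a ducktape node without a `version` attribute."""


print(repr(fix_opts_for_new_jvm(ClusterNode())))
```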

Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Nikolay Izhikov
Ismael, thanks for the clarification.

I updated the KIP according to your proposal.

> On 21 May 2020, at 17:06, Ismael Juma wrote:
> 
> Given what we've seen in the test, it would be good to mention that TLS 1.3
> will not work for users who have configured ciphers explicitly. If such
> users want to use TLS 1.3, they will have to update the list of ciphers to
> include TLS 1.3 ciphers (which use a different naming convention). TLS 1.2
> will continue to work as usual, so there is no compatibility issue.
> 
> Ismael
> 
> On Tue, May 19, 2020 at 12:19 PM Nikolay Izhikov 
> wrote:
> 
>> PR - https://github.com/apache/kafka/pull/8695
>> 
>>> On 18 May 2020, at 23:30, Nikolay Izhikov wrote:
>>> 
>>> Hello, Colin
>>> 
>>> We need hack only because TLSv1.3 not supported in java8.
>>> 
>>>> Java 8 will receive TLS 1.3 support later this year (
>> https://java.com/en/jre-jdk-cryptoroadmap.html)
>>> 
>>> We can
>>> 
>>> 1. Enable TLSv1.3 for java11 for now. And after java8 get TLSv1.3
>> support remove it.
>>> 2. Or we can wait and enable it after java8 update.
>>> 
>>> What do you think?
>>> 
>>>> On 18 May 2020, at 22:51, Ismael Juma wrote:
>>>> 
>>>> Yeah, agreed. One option is to actually only change this in Apache Kafka
>>>> 3.0 and avoid the hack altogether. We could make TLS 1.3 the default and
>>>> have 1.2 as one of the enabled protocols.
>>>> 
>>>> Ismael
>>>> 
>>>> On Mon, May 18, 2020 at 12:24 PM Colin McCabe 
>> wrote:
>>>> 
>>>>> Hmm.  It would be good to figure out if we are going to remove this
>>>>> compatibility hack in the next major release of Kafka?  In other
>> words, in
>>>>> Kafka 3.0, will we enable TLS 1.3 by default even if the cipher suite
>> is
>>>>> specified?
>>>>> 
>>>>> best,
>>>>> Colin
>>>>> 
>>>>> 
>>>>> On Mon, May 18, 2020, at 09:26, Ismael Juma wrote:
>>>>>> Sounds good.
>>>>>> 
>>>>>> Ismael
>>>>>> 
>>>>>> 
>>>>>> On Mon, May 18, 2020, 9:03 AM Nikolay Izhikov 
>>>>> wrote:
>>>>>> 
>>>>>>>> A safer approach may be to only add TLS 1.3 to the list if the
>> cipher
>>>>>>> suite config has not been specified.
>>>>>>>> So, if TLS 1.3 is added to the list by Kafka, it would seem that it
>>>>>>> would not work if the user specified a list of cipher suites for
>>>>> previous
>>>>>>> TLS versions
>>>>>>> 
>>>>>>> Let’s just add test for this case?
>>>>>>> I can prepare the preliminary PR for this KIP and add this kind of
>>>>> test to
>>>>>>> it.
>>>>>>> 
>>>>>>> What do you think?
>>>>>>> 
>>>>>>> 
>>>>>>>> On 18 May 2020, at 18:59, Nikolay Izhikov wrote:
>>>>>>>> 
>>>>>>>>> 1. I meant that `ssl.protocol` is TLSv1.2 while
>>>>> `ssl.enabled.protocols`
>>>>>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact
>>>>>>>> 
>>>>>>>> `ssl.protocol` is what will be used, by default, in this KIP is
>> stays
>>>>>>> unchanged (TLSv1.2) Please, see [1]
>>>>>>>> `ssl.enabled.protocols` is list of protocols that  *can* be used.
>>>>> This
>>>>>>> value is just passed to the `SSLEngine` implementation.
>>>>>>>> Please, see DefaultSslEngineFactory#createSslEngine [2]
>>>>>>>> 
>>>>>>>>> 2. My question is not about obsolete protocols, it is about people
>>>>>>> using TLS 1.2 with specified cipher suites. How will that behave when
>>>>> TLS
>>>>>>> 1.3 is enabled by default?
>>>>>>>> 
>>>>>>>> They don’t change anything and all just work as expected on java11.
>>>>>>>> 
>>>>>>>>> 3. An additional question is how does this impact Java 8 users?
>>>>>>>> 
>>>>>>>> Yes.
>>>>>>&

Re: [VOTE] KIP-573: Enable TLSv1.3 by default

2020-05-21 Thread Nikolay Izhikov
Thanks everyone!

After 3+ business days since this thread started, I'm concluding the vote
on KIP-573.

The KIP has passed with:

3 binding votes from Ismael Juma, Rajini Sivaram, Manikumar.

Thank you all for voting!

> On 21 May 2020, at 19:50, Ismael Juma wrote:
> 
> Nikolay, you have enough votes and 72 hours have passed, so you can close
> this vote as successful whenever you're ready.
> 
> Ismael
> 
> On Mon, Mar 2, 2020 at 10:55 AM Nikolay Izhikov  wrote:
> 
>> Hello.
>> 
>> I would like to start vote for KIP-573: Enable TLSv1.3 by default
>> 
>> KIP -
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
>> Discussion thread -
>> https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E



Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-05-20 Thread Nikolay Izhikov
Hello, Ismael.

> What I meant to ask is if we changed the configuration so that TLS 1.3 is 
> exercised in the system tests by default.

Are you suggesting we just use TLSv1.3 instead of TLSv1.2 when the newer 
version is supported?
Or do you suggest introducing one more parameter for applicable tests, like 
`ssl_protocol_version=[TLSv1.2, TLSv1.3]`?

The second option doubles the number of test cases, so the run will be 
slower.

> On 24 April 2020, at 17:34, Ismael Juma wrote:
> 
> Right, some companies run them nightly. What I meant to ask is if we
> changed the configuration so that TLS 1.3 is exercised in the system tests
> by default.
> 
> Ismael
> 
> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov  wrote:
> 
>> Hello, Ismael.
>> 
>> AFAIK we don’t run system tests nightly.
>> Do we have resources to run system tests periodically?
>> 
>> When I did the testing I used servers my employer gave me.
>> 
>>> On 24 April 2020, at 08:05, Ismael Juma wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Seems like we have been able to run the system tests with TLS 1.3. Do we
>>> run them nightly?
>>> 
>>> Ismael
>>> 
>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello, Kafka team.
>>>> 
>>>> I ran system tests that use SSL for the TLSv1.3.
>>>> You can find the results of the tests in the Jira ticket [1], [2], [3],
>>>> [4].
>>>> 
>>>> I also, need a changes [5] in `security_config.py` to execute system
>> tests
>>>> with TLSv1.3(more info in PR description).
>>>> Please, take a look.
>>>> 
>>>> Test environment:
>>>>   • openjdk11
>>>>   • trunk + changes from my PR [5].
>>>> 
>>>> Full system tests results have volume 15gb.
>>>> Should I share full logs with you?
>>>> 
>>>> What else should be done before we can enable TLSv1.3 by default?
>>>> 
>>>> [1]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
>>>> 
>>>> [2]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
>>>> 
>>>> [3]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
>>>> 
>>>> [4]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
>>>> 
>>>> [5]
>>>> 
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
>>>> 
>>>>> On 29 January 2020, at 15:27, Nikolay Izhikov wrote:
>>>>> 
>>>>> Hello, Rajini.
>>>>> 
>>>>> Thanks for the feedback.
>>>>> 
>>>>> I’ve searched tests by the «ssl» keyword and found the following tests:
>>>>> 
>>>>> ./test/kafkatest/services/kafka_log4j_appender.py
>>>>> ./test/kafkatest/services/listener_security_config.py
>>>>> ./test/kafkatest/services/security/security_config.py
>>>>> ./test/kafkatest/tests/core/security_test.py
>>>>> 
>>>>> Is this all tests that need to be run with the TLSv1.3 to ensure we can
>>>> enable it by default?
>>>>> 
>>>>>> On 28 January 2020, at 14:58, Rajini Sivaram wrote:
>>>>>> 
>>>>>> Hi Nikolay,
>>>>>> 
>>>>>> Not sure of the total space required. But you can run a collection of
>>>> tests at a time instead of running them all together. That way, you
>> could
>>>> just run all the tests that enable SSL. Details of running a subset of
>>>> tests are in the README in tests.
>>>>>> 
>>>>>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov 
>>>> wrote:
>>>>>> Hello, Rajini.
>>>>>> 
>>>>>> I’m tried to run all system tests but failed for now.
>>>>>> It happens, that system tests generates a lot of logs.
>>>>>>

Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-19 Thread Nikolay Izhikov
PR - https://github.com/apache/kafka/pull/8695

> On 18 May 2020, at 23:30, Nikolay Izhikov wrote:
> 
> Hello, Colin
> 
> We need hack only because TLSv1.3 not supported in java8.
> 
>> Java 8 will receive TLS 1.3 support later this year 
>> (https://java.com/en/jre-jdk-cryptoroadmap.html)
> 
> We can 
> 
> 1. Enable TLSv1.3 for java11 for now. And after java8 get TLSv1.3 support 
> remove it.
> 2. Or we can wait and enable it after java8 update.
> 
> What do you think?
> 
>> On 18 May 2020, at 22:51, Ismael Juma wrote:
>> 
>> Yeah, agreed. One option is to actually only change this in Apache Kafka
>> 3.0 and avoid the hack altogether. We could make TLS 1.3 the default and
>> have 1.2 as one of the enabled protocols.
>> 
>> Ismael
>> 
>> On Mon, May 18, 2020 at 12:24 PM Colin McCabe  wrote:
>> 
>>> Hmm.  It would be good to figure out if we are going to remove this
>>> compatibility hack in the next major release of Kafka?  In other words, in
>>> Kafka 3.0, will we enable TLS 1.3 by default even if the cipher suite is
>>> specified?
>>> 
>>> best,
>>> Colin
>>> 
>>> 
>>> On Mon, May 18, 2020, at 09:26, Ismael Juma wrote:
>>>> Sounds good.
>>>> 
>>>> Ismael
>>>> 
>>>> 
>>>> On Mon, May 18, 2020, 9:03 AM Nikolay Izhikov 
>>> wrote:
>>>> 
>>>>>> A safer approach may be to only add TLS 1.3 to the list if the cipher
>>>>> suite config has not been specified.
>>>>>> So, if TLS 1.3 is added to the list by Kafka, it would seem that it
>>>>> would not work if the user specified a list of cipher suites for
>>> previous
>>>>> TLS versions
>>>>> 
>>>>> Let’s just add test for this case?
>>>>> I can prepare the preliminary PR for this KIP and add this kind of
>>> test to
>>>>> it.
>>>>> 
>>>>> What do you think?
>>>>> 
>>>>> 
>>>>>> On 18 May 2020, at 18:59, Nikolay Izhikov wrote:
>>>>>> 
>>>>>>> 1. I meant that `ssl.protocol` is TLSv1.2 while
>>> `ssl.enabled.protocols`
>>>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact
>>>>>> 
>>>>>> `ssl.protocol` is what will be used, by default, in this KIP is stays
>>>>> unchanged (TLSv1.2) Please, see [1]
>>>>>> `ssl.enabled.protocols` is list of protocols that  *can* be used.
>>> This
>>>>> value is just passed to the `SSLEngine` implementation.
>>>>>> Please, see DefaultSslEngineFactory#createSslEngine [2]
>>>>>> 
>>>>>>> 2. My question is not about obsolete protocols, it is about people
>>>>> using TLS 1.2 with specified cipher suites. How will that behave when
>>> TLS
>>>>> 1.3 is enabled by default?
>>>>>> 
>>>>>> They don’t change anything and all just work as expected on java11.
>>>>>> 
>>>>>>> 3. An additional question is how does this impact Java 8 users?
>>>>>> 
>>>>>> Yes.
>>>>>> If SSLEngine doesn’t support TLSv1.3 then java8 users should
>>> explicitly
>>>>> modify `ssl.enabled.protocols` and set it to `TLSv1.2`.
>>>>>> 
>>>>>> [1]
>>>>> 
>>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
>>>>>> [2]
>>>>> 
>>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
>>>>>> 
>>>>>> On 18 May 2020, at 17:34, Ismael Juma wrote:
>>>>>>> 
>>>>>>> Nikolay,
>>>>>>> 
>>>>>>> Thanks for the comments. More below:
>>>>>>> 
>>>>>>> 1. I meant that `ssl.protocol` is TLSv1.2 while
>>> `ssl.enabled.protocols`
>>>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact?
>>>>>>> 2. My question is not about obsolete protocols, it is about people
>>>>> using TLS 1.2 with specified cipher suites. How will that behave when
>>> TLS
>>>>> 1.3 is enabled by default?
>>>>>

Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-18 Thread Nikolay Izhikov
Hello, Colin

We need the hack only because TLSv1.3 is not supported in Java 8.

>  Java 8 will receive TLS 1.3 support later this year 
> (https://java.com/en/jre-jdk-cryptoroadmap.html)

We can:

1. Enable TLSv1.3 for Java 11 for now, and remove the special case once 
Java 8 gets TLSv1.3 support.
2. Or wait and enable it after the Java 8 update.

What do you think?

> On 18 May 2020, at 22:51, Ismael Juma wrote:
> 
> Yeah, agreed. One option is to actually only change this in Apache Kafka
> 3.0 and avoid the hack altogether. We could make TLS 1.3 the default and
> have 1.2 as one of the enabled protocols.
> 
> Ismael
> 
> On Mon, May 18, 2020 at 12:24 PM Colin McCabe  wrote:
> 
>> Hmm.  It would be good to figure out if we are going to remove this
>> compatibility hack in the next major release of Kafka?  In other words, in
>> Kafka 3.0, will we enable TLS 1.3 by default even if the cipher suite is
>> specified?
>> 
>> best,
>> Colin
>> 
>> 
>> On Mon, May 18, 2020, at 09:26, Ismael Juma wrote:
>>> Sounds good.
>>> 
>>> Ismael
>>> 
>>> 
>>> On Mon, May 18, 2020, 9:03 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>>> A safer approach may be to only add TLS 1.3 to the list if the cipher
>>>> suite config has not been specified.
>>>>> So, if TLS 1.3 is added to the list by Kafka, it would seem that it
>>>> would not work if the user specified a list of cipher suites for
>> previous
>>>> TLS versions
>>>> 
>>>> Let’s just add test for this case?
>>>> I can prepare the preliminary PR for this KIP and add this kind of
>> test to
>>>> it.
>>>> 
>>>> What do you think?
>>>> 
>>>> 
>>>>> On 18 May 2020, at 18:59, Nikolay Izhikov wrote:
>>>>> 
>>>>>> 1. I meant that `ssl.protocol` is TLSv1.2 while
>> `ssl.enabled.protocols`
>>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact
>>>>> 
>>>>> `ssl.protocol` is what will be used, by default, in this KIP is stays
>>>> unchanged (TLSv1.2) Please, see [1]
>>>>> `ssl.enabled.protocols` is list of protocols that  *can* be used.
>> This
>>>> value is just passed to the `SSLEngine` implementation.
>>>>> Please, see DefaultSslEngineFactory#createSslEngine [2]
>>>>> 
>>>>>> 2. My question is not about obsolete protocols, it is about people
>>>> using TLS 1.2 with specified cipher suites. How will that behave when
>> TLS
>>>> 1.3 is enabled by default?
>>>>> 
>>>>> They don’t change anything and all just work as expected on java11.
>>>>> 
>>>>>> 3. An additional question is how does this impact Java 8 users?
>>>>> 
>>>>> Yes.
>>>>> If SSLEngine doesn’t support TLSv1.3 then java8 users should
>> explicitly
>>>> modify `ssl.enabled.protocols` and set it to `TLSv1.2`.
>>>>> 
>>>>> [1]
>>>> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
>>>>> [2]
>>>> 
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
>>>>> 
>>>>>> On 18 May 2020, at 17:34, Ismael Juma wrote:
>>>>>> 
>>>>>> Nikolay,
>>>>>> 
>>>>>> Thanks for the comments. More below:
>>>>>> 
>>>>>> 1. I meant that `ssl.protocol` is TLSv1.2 while
>> `ssl.enabled.protocols`
>>>> is `TLSv1.2, TLSv1.3`. How do these two configs interact?
>>>>>> 2. My question is not about obsolete protocols, it is about people
>>>> using TLS 1.2 with specified cipher suites. How will that behave when
>> TLS
>>>> 1.3 is enabled by default?
>>>>>> 3. An additional question is how does this impact Java 8 users?
>> Java 8
>>>> will receive TLS 1.3 support later this year (
>>>> https://java.com/en/jre-jdk-cryptoroadmap.html), but it currently does
>>>> not support it. One way to handle this would be to check if the
>> underlying
>>>> JVM supports TLS 1.3 before enabling it.
>>>>>> 
>>>>>> I hope this clarifies my questions.
>>>>>> 
>>>>>> Ismael
>>

Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-18 Thread Nikolay Izhikov
> A safer approach may be to only add TLS 1.3 to the list if the cipher suite 
> config has not been specified.
> So, if TLS 1.3 is added to the list by Kafka, it would seem that it would not 
> work if the user specified a list of cipher suites for previous TLS versions

Let’s just add a test for this case.
I can prepare a preliminary PR for this KIP and add that kind of test to it.

What do you think?


> On 18 May 2020, at 18:59, Nikolay Izhikov wrote:
> 
>> 1. I meant that `ssl.protocol` is TLSv1.2 while `ssl.enabled.protocols` is 
>> `TLSv1.2, TLSv1.3`. How do these two configs interact
> 
> `ssl.protocol` is what will be used, by default, in this KIP is stays 
> unchanged (TLSv1.2) Please, see [1]
> `ssl.enabled.protocols` is list of protocols that  *can* be used. This value 
> is just passed to the `SSLEngine` implementation.
> Please, see DefaultSslEngineFactory#createSslEngine [2]
> 
>> 2. My question is not about obsolete protocols, it is about people using TLS 
>> 1.2 with specified cipher suites. How will that behave when TLS 1.3 is 
>> enabled by default?
> 
> They don’t change anything and all just work as expected on java11.
> 
>> 3. An additional question is how does this impact Java 8 users? 
> 
> Yes.
> If SSLEngine doesn’t support TLSv1.3 then java8 users should explicitly 
> modify `ssl.enabled.protocols` and set it to `TLSv1.2`.
> 
> [1] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
> [2] 
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
> 
>> On 18 May 2020, at 17:34, Ismael Juma wrote:
>> 
>> Nikolay,
>> 
>> Thanks for the comments. More below:
>> 
>> 1. I meant that `ssl.protocol` is TLSv1.2 while `ssl.enabled.protocols` is 
>> `TLSv1.2, TLSv1.3`. How do these two configs interact?
>> 2. My question is not about obsolete protocols, it is about people using TLS 
>> 1.2 with specified cipher suites. How will that behave when TLS 1.3 is 
>> enabled by default?
>> 3. An additional question is how does this impact Java 8 users? Java 8 will 
>> receive TLS 1.3 support later this year 
>> (https://java.com/en/jre-jdk-cryptoroadmap.html), but it currently does not 
>> support it. One way to handle this would be to check if the underlying JVM 
>> supports TLS 1.3 before enabling it.
>> 
>> I hope this clarifies my questions.
>> 
>> Ismael
>> 
>> On Mon, May 18, 2020 at 6:44 AM Nikolay Izhikov  wrote:
>> Hello, Ismael.
>> 
>> Here is answers to your questions:
>> 
>>> Quick question, the following is meant to include TLSv1.3 as well, right?
>>> Change the value of the SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS to 
>>> «TLSv1.2»
>> 
>> I propose to have the following value 
>> SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS = «TLSv1.2,TLSv.1.3»
>> 
>>> 1. `ssl.protocol` would remain TLSv1.2 with this change. It would be good 
>>> to explain why that's OK.
>> 
>> I think it covered by the following statements in KIP.
>> If you know more trustworthy sources of this kind of information, please, 
>> let me know.
>> 
>> ```
>> For now, only TLS1.2 and TLS1.3 are recommended for the usage, other 
>> versions of TLS considered as obsolete:
>>• https://www.rfc-editor.org/info/rfc8446
>>• 
>> https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development
>> 
>> ```
>> 
>>> 2. What is the behavior for people who have configured `ssl.cipher.suites`?
>>> The cipher suite names are different in TLS 1.3. What would be the behavior
>>> if the client requests TLS 1.3, but the server only has cipher suites for
>>> TLS 1.2? It would be good to explain the expected behavior and add tests to 
>>> verify it.
>> 
>> I think those users should update `SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS` 
>> and enable required(but obsolete) version of TLS they use.
>> After one should migrate to the reliable TLS version.
>> This reflected in the KIP:
>> 
>> ```
>> Migration: Users who are using TLSv1.1 and TLSv1 should enable these 
>> versions of the protocol with the explicit configuration property 
>> "ssl.enabled.protocols"
>> ```
>> 
>>> On 25 February 2020, at 08:57, Nikolay Izhikov wrote:
>>> 
>>> Hello.
>>> 
>>> Any feedback on this?
>>> 
>>> This change seems very simple, I can start vote right now if nothing to 
>>> discuss here.
>>> 
>>>> On 21 February 2020, at 15:18, Nikolay Izhikov wrote:
>>>> 
>>>> Hello, 
>>>> 
>>>> I'd like to start a discussion of KIP [1]
>>>> This is follow-up for the KIP-553 [2]
>>>> 
>>>> Its goal is to enable TLSv1.3 by default.
>>>> 
>>>> Your comments and suggestions are welcome.
>>>> 
>>>> [1] 
>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
>>>> [2] 
>>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
>>> 
>> 
> 



Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-18 Thread Nikolay Izhikov
> 1. I meant that `ssl.protocol` is TLSv1.2 while `ssl.enabled.protocols` is 
> `TLSv1.2, TLSv1.3`. How do these two configs interact

`ssl.protocol` is what will be used by default; in this KIP it stays 
unchanged (TLSv1.2). Please see [1].
`ssl.enabled.protocols` is the list of protocols that *can* be used. This 
value is just passed to the `SSLEngine` implementation.
Please see DefaultSslEngineFactory#createSslEngine [2].

> 2. My question is not about obsolete protocols, it is about people using TLS 
> 1.2 with specified cipher suites. How will that behave when TLS 1.3 is 
> enabled by default?

They don’t need to change anything; everything just works as expected on Java 11.

> 3. An additional question is how does this impact Java 8 users? 

Yes.
If the SSLEngine doesn’t support TLSv1.3, then Java 8 users should 
explicitly set `ssl.enabled.protocols` to `TLSv1.2`.

[1] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L218
[2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/ssl/DefaultSslEngineFactory.java#L164
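A rough analogy using Python's `ssl` module (a stand-in only — the Kafka configs above feed the JVM's SSLEngine, not this API): the enabled set bounds which versions *can* be negotiated, and TLS 1.3 cipher suites follow a different naming convention than the TLS 1.2 ones, which is why an explicit TLS 1.2-only cipher list effectively blocks TLS 1.3.

```python
import ssl

# Analogue of ssl.enabled.protocols = "TLSv1.2,TLSv1.3": bound the range
# of versions the engine is allowed to negotiate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# TLS 1.3 suites keep the IANA TLS_AES_.../TLS_CHACHA20_... names even in
# OpenSSL, while TLS 1.2 suites are named differently (the JVM uses
# TLS_ECDHE_... style names there) -- so a cipher list written for
# TLS 1.2 matches no TLS 1.3 suite.
tls13 = [c["name"] for c in ctx.get_ciphers() if c["protocol"] == "TLSv1.3"]
tls12 = [c["name"] for c in ctx.get_ciphers() if c["protocol"] == "TLSv1.2"]
print(tls13[:2], tls12[:2])
```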

> On 18 May 2020, at 17:34, Ismael Juma wrote:
> 
> Nikolay,
> 
> Thanks for the comments. More below:
> 
> 1. I meant that `ssl.protocol` is TLSv1.2 while `ssl.enabled.protocols` is 
> `TLSv1.2, TLSv1.3`. How do these two configs interact?
> 2. My question is not about obsolete protocols, it is about people using TLS 
> 1.2 with specified cipher suites. How will that behave when TLS 1.3 is 
> enabled by default?
> 3. An additional question is how does this impact Java 8 users? Java 8 will 
> receive TLS 1.3 support later this year 
> (https://java.com/en/jre-jdk-cryptoroadmap.html), but it currently does not 
> support it. One way to handle this would be to check if the underlying JVM 
> supports TLS 1.3 before enabling it.
> 
> I hope this clarifies my questions.
> 
> Ismael
> 
> On Mon, May 18, 2020 at 6:44 AM Nikolay Izhikov  wrote:
> Hello, Ismael.
> 
> Here is answers to your questions:
> 
> > Quick question, the following is meant to include TLSv1.3 as well, right?
> > Change the value of the SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS to 
> > «TLSv1.2»
> 
> I propose to have the following value 
> SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS = «TLSv1.2,TLSv.1.3»
> 
> > 1. `ssl.protocol` would remain TLSv1.2 with this change. It would be good 
> > to explain why that's OK.
> 
> I think it covered by the following statements in KIP.
> If you know more trustworthy sources of this kind of information, please, let 
> me know.
> 
> ```
> For now, only TLS1.2 and TLS1.3 are recommended for the usage, other versions 
> of TLS considered as obsolete:
> • https://www.rfc-editor.org/info/rfc8446
> • 
> https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development
> 
> ```
> 
> > 2. What is the behavior for people who have configured `ssl.cipher.suites`?
> > The cipher suite names are different in TLS 1.3. What would be the behavior
> > if the client requests TLS 1.3, but the server only has cipher suites for
> > TLS 1.2? It would be good to explain the expected behavior and add tests to 
> > verify it.
> 
> I think those users should update `SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS` 
> and enable required(but obsolete) version of TLS they use.
> After one should migrate to the reliable TLS version.
> This reflected in the KIP:
> 
> ```
> Migration: Users who are using TLSv1.1 and TLSv1 should enable these versions 
> of the protocol with the explicit configuration property 
> "ssl.enabled.protocols"
> ```
> 
> > On Feb 25, 2020, at 08:57, Nikolay Izhikov wrote:
> > 
> > Hello.
> > 
> > Any feedback on this?
> > 
> > This change seems very simple, I can start vote right now if nothing to 
> > discuss here.
> > 
> >> On Feb 21, 2020, at 15:18, Nikolay Izhikov wrote:
> >> 
> >> Hello, 
> >> 
> >> I'd like to start a discussion of KIP [1]
> >> This is follow-up for the KIP-553 [2]
> >> 
> >> Its goal is to enable TLSv1.3 by default.
> >> 
> >> Your comments and suggestions are welcome.
> >> 
> >> [1] 
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> >> [2] 
> >> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> > 
> 



Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-05-18 Thread Nikolay Izhikov
Hello, Ismael.

Here are answers to your questions:

> Quick question, the following is meant to include TLSv1.3 as well, right?
> Change the value of the SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS to «TLSv1.2»
 
I propose to have the following value: SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS 
= «TLSv1.2,TLSv1.3»
 
> 1. `ssl.protocol` would remain TLSv1.2 with this change. It would be good to 
> explain why that's OK.

I think it is covered by the following statements in the KIP.
If you know more trustworthy sources for this kind of information, please let 
me know.

```
For now, only TLSv1.2 and TLSv1.3 are recommended for use; other versions 
of TLS are considered obsolete:
• https://www.rfc-editor.org/info/rfc8446
• 
https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development

```

> 2. What is the behavior for people who have configured `ssl.cipher.suites`?
> The cipher suite names are different in TLS 1.3. What would be the behavior
> if the client requests TLS 1.3, but the server only has cipher suites for
> TLS 1.2? It would be good to explain the expected behavior and add tests to 
> verify it.

I think those users should update `SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS` 
and enable the required (but obsolete) version of TLS they use.
Afterwards, they should migrate to a reliable TLS version.
This is reflected in the KIP:

```
Migration: Users who are using TLSv1.1 and TLSv1 should enable these versions 
of the protocol with the explicit configuration property "ssl.enabled.protocols"
```
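For users on the older protocol versions, this amounts to setting the configs explicitly. A hypothetical client/broker config sketch (the property names `ssl.enabled.protocols` and `ssl.protocol` are real Kafka configs; the chosen values are only an example):

```properties
# Keep obsolete protocols available until clients can be migrated.
ssl.enabled.protocols=TLSv1,TLSv1.1,TLSv1.2
# Default protocol used when negotiating.
ssl.protocol=TLSv1.2
```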

> On Feb 25, 2020, at 08:57, Nikolay Izhikov wrote:
> 
> Hello.
> 
> Any feedback on this?
> 
> This change seems very simple, I can start vote right now if nothing to 
> discuss here.
> 
>> On Feb 21, 2020, at 15:18, Nikolay Izhikov wrote:
>> 
>> Hello, 
>> 
>> I'd like to start a discussion of KIP [1]
>> This is follow-up for the KIP-553 [2]
>> 
>> Its goal is to enable TLSv1.3 by default.
>> 
>> Your comments and suggestions are welcome.
>> 
>> [1] 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
>> [2] 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> 



Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-05-18 Thread Nikolay Izhikov
Hello, Ismael.

I think we should move the ongoing discussion into the KIP-573 discussion thread [1].

I will respond both here and in the KIP-573 discussion thread, because this KIP 
was already adopted in [2].

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
[2] 
https://github.com/apache/kafka/commit/172409c44b8551e2315bd93044a8a95ccda4699f

> On May 18, 2020, at 01:34, Ismael Juma wrote:
> 
> Hi Nikolay,
> 
> Quick question, the following is meant to include TLSv1.3 as well, right?
> 
> Change the value of the SslConfigs.DEFAULT_SSL_ENABLED_PROTOCOLS to
>> "TLSv1.2"
> 
> 
> In addition, two more questions:
> 
> 1. `ssl.protocol` would remain TLSv1.2 with this change. It would be good
> to explain why that's OK.
> 2. What is the behavior for people who have configured `ssl.cipher.suites`?
> The cipher suite names are different in TLS 1.3. What would be the behavior
> if the client requests TLS 1.3, but the server only has cipher suites for
> TLS 1.2? It would be good to explain the expected behavior and add tests to
> verify it.
> 
> Ismael
> 
> On Thu, Apr 30, 2020 at 9:47 AM Nikolay Izhikov  wrote:
> 
>> Ticket created:
>> 
>> https://issues.apache.org/jira/browse/KAFKA-9943
>> 
>> I will prepare the PR, shortly.
>> 
>>> On Apr 27, 2020, at 17:55, Ismael Juma wrote:
>>> 
>>> Yes, a PR would be great.
>>> 
>>> Ismael
>>> 
>>> On Mon, Apr 27, 2020, 2:10 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello, Ismael.
>>>> 
>>>> AFAIK we don’t run tests with the TLSv1.3, by default.
>>>> Are you suggesting to do it?
>>>> I can create a PR for it.
>>>> 
>>>>> On Apr 24, 2020, at 17:34, Ismael Juma wrote:
>>>>> 
>>>>> Right, some companies run them nightly. What I meant to ask is if we
>>>>> changed the configuration so that TLS 1.3 is exercised in the system
>>>> tests
>>>>> by default.
>>>>> 
>>>>> Ismael
>>>>> 
>>>>> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov 
>>>> wrote:
>>>>> 
>>>>>> Hello, Ismael.
>>>>>> 
>>>>>> AFAIK we don’t run system tests nightly.
>>>>>> Do we have resources to run system tests periodically?
>>>>>> 
>>>>>> When I did the testing I used servers my employer gave me.
>>>>>> 
>>>>>>> On Apr 24, 2020, at 08:05, Ismael Juma wrote:
>>>>>>> 
>>>>>>> Hi Nikolay,
>>>>>>> 
>>>>>>> Seems like we have been able to run the system tests with TLS 1.3. Do
>>>> we
>>>>>>> run them nightly?
>>>>>>> 
>>>>>>> Ismael
>>>>>>> 
>>>>>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov >> 
>>>>>> wrote:
>>>>>>> 
>>>>>>>> Hello, Kafka team.
>>>>>>>> 
>>>>>>>> I ran system tests that use SSL for the TLSv1.3.
>>>>>>>> You can find the results of the tests in the Jira ticket [1], [2],
>>>> [3],
>>>>>>>> [4].
>>>>>>>> 
>>>>>>>> I also, need a changes [5] in `security_config.py` to execute system
>>>>>> tests
>>>>>>>> with TLSv1.3(more info in PR description).
>>>>>>>> Please, take a look.
>>>>>>>> 
>>>>>>>> Test environment:
>>>>>>>> • openjdk11
>>>>>>>> • trunk + changes from my PR [5].
>>>>>>>> 
>>>>>>>> Full system tests results have volume 15gb.
>>>>>>>> Should I share full logs with you?
>>>>>>>> 
>>>>>>>> What else should be done before we can enable TLSv1.3 by default?
>>>>>>>> 
>>>>>>>> [1]
>>>>>>>> 
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
>>>>>>>> 
>>>>>>>> [2]
>>>>>>>> 
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-931

[jira] [Created] (KAFKA-9986) Checkpointing API for State Stores

2020-05-13 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-9986:
--

 Summary: Checkpointing API for State Stores
 Key: KAFKA-9986
 URL: https://issues.apache.org/jira/browse/KAFKA-9986
 Project: Kafka
  Issue Type: New Feature
Reporter: Nikolay Izhikov


The parent ticket is KAFKA-3184.

The goal of this ticket is to provide a general checkpointing API for state 
stores in Streams (not only for in-memory but also for persistent stores), 
where the checkpoint location can be either local disks or remote storage. 

Design scope is primarily on:

  # the API design for both checkpointing as well as loading checkpoints into 
the local state stores
  # the mechanism of the checkpointing, e.g. whether it should be async? 
whether it should be executed on separate threads? etc. 
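As a purely hypothetical illustration of the first point, the API shape being asked for is a store that can snapshot its contents out and restore them back. None of these names exist in Kafka Streams, and the real design questions (async execution, threading, remote storage) are deliberately left out:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative-only sketch of a checkpointable state store: checkpoint()
// produces a consistent snapshot, loadCheckpoint() restores from one.
public class CheckpointSketch {
    interface CheckpointableStore {
        void put(String key, String value);
        String get(String key);
        Map<String, String> checkpoint();                  // snapshot out
        void loadCheckpoint(Map<String, String> snapshot); // restore in
    }

    static class InMemoryStore implements CheckpointableStore {
        private Map<String, String> data = new HashMap<>();
        public void put(String key, String value) { data.put(key, value); }
        public String get(String key) { return data.get(key); }
        public Map<String, String> checkpoint() { return new HashMap<>(data); }
        public void loadCheckpoint(Map<String, String> snapshot) {
            data = new HashMap<>(snapshot);
        }
    }

    public static void main(String[] args) {
        InMemoryStore store = new InMemoryStore();
        store.put("k", "v1");
        Map<String, String> snap = store.checkpoint(); // the "remote" copy
        store.put("k", "v2");                          // diverge after snapshot
        store.loadCheckpoint(snap);                    // roll back to checkpoint
        System.out.println(store.get("k"));            // prints v1
    }
}
```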



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[REVIEW REQUEST] KAFKA-3184: Add Checkpoint for In-memory State Store

2020-05-07 Thread Nikolay Izhikov
Hello, Kafka Team.

I prepared a PR [1] for the KAFKA-3184 [2]
Could someone please review it?


[1] https://github.com/apache/kafka/pull/8592
[2] https://issues.apache.org/jira/browse/KAFKA-3184


[jira] [Created] (KAFKA-9943) Enable TLSv.1.3 in system tests "run all" execution.

2020-04-30 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-9943:
--

 Summary: Enable TLSv.1.3 in system tests "run all" execution.
 Key: KAFKA-9943
 URL: https://issues.apache.org/jira/browse/KAFKA-9943
 Project: Kafka
  Issue Type: Test
Reporter: Nikolay Izhikov


We need to enable TLSv1.3 in the "run all" execution of the system tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-04-30 Thread Nikolay Izhikov
Ticket created:

https://issues.apache.org/jira/browse/KAFKA-9943

I will prepare the PR shortly.

> On Apr 27, 2020, at 17:55, Ismael Juma wrote:
> 
> Yes, a PR would be great.
> 
> Ismael
> 
> On Mon, Apr 27, 2020, 2:10 AM Nikolay Izhikov  wrote:
> 
>> Hello, Ismael.
>> 
>> AFAIK we don’t run tests with the TLSv1.3, by default.
>> Are you suggesting to do it?
>> I can create a PR for it.
>> 
>>> On Apr 24, 2020, at 17:34, Ismael Juma wrote:
>>> 
>>> Right, some companies run them nightly. What I meant to ask is if we
>>> changed the configuration so that TLS 1.3 is exercised in the system
>> tests
>>> by default.
>>> 
>>> Ismael
>>> 
>>> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello, Ismael.
>>>> 
>>>> AFAIK we don’t run system tests nightly.
>>>> Do we have resources to run system tests periodically?
>>>> 
>>>> When I did the testing I used servers my employer gave me.
>>>> 
>>>>> On Apr 24, 2020, at 08:05, Ismael Juma wrote:
>>>>> 
>>>>> Hi Nikolay,
>>>>> 
>>>>> Seems like we have been able to run the system tests with TLS 1.3. Do
>> we
>>>>> run them nightly?
>>>>> 
>>>>> Ismael
>>>>> 
>>>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
>>>> wrote:
>>>>> 
>>>>>> Hello, Kafka team.
>>>>>> 
>>>>>> I ran system tests that use SSL for the TLSv1.3.
>>>>>> You can find the results of the tests in the Jira ticket [1], [2],
>> [3],
>>>>>> [4].
>>>>>> 
>>>>>> I also, need a changes [5] in `security_config.py` to execute system
>>>> tests
>>>>>> with TLSv1.3(more info in PR description).
>>>>>> Please, take a look.
>>>>>> 
>>>>>> Test environment:
>>>>>>  • openjdk11
>>>>>>  • trunk + changes from my PR [5].
>>>>>> 
>>>>>> Full system tests results have volume 15gb.
>>>>>> Should I share full logs with you?
>>>>>> 
>>>>>> What else should be done before we can enable TLSv1.3 by default?
>>>>>> 
>>>>>> [1]
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
>>>>>> 
>>>>>> [2]
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
>>>>>> 
>>>>>> [3]
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
>>>>>> 
>>>>>> [4]
>>>>>> 
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
>>>>>> 
>>>>>> [5]
>>>>>> 
>>>> 
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
>>>>>> 
>>>>>>> On Jan 29, 2020, at 15:27, Nikolay Izhikov wrote:
>>>>>>> 
>>>>>>> Hello, Rajini.
>>>>>>> 
>>>>>>> Thanks for the feedback.
>>>>>>> 
>>>>>>> I’ve searched tests by the «ssl» keyword and found the following
>> tests:
>>>>>>> 
>>>>>>> ./test/kafkatest/services/kafka_log4j_appender.py
>>>>>>> ./test/kafkatest/services/listener_security_config.py
>>>>>>> ./test/kafkatest/services/security/security_config.py
>>>>>>> ./test/kafkatest/tests/core/security_test.py
>>>>>>> 
>>>>>>> Is this all tests that need to be run with the TLSv1.3 to ensure we
>> can
>>>>>> enable it by default?
>>>>>>> 
>>>>>>>> On Jan 28, 2020, at 14:58, Rajini Sivaram wrote:

Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-04-27 Thread Nikolay Izhikov
Hello, Ismael.

AFAIK we don’t run tests with TLSv1.3 by default.
Are you suggesting we do it?
I can create a PR for it.

> On Apr 24, 2020, at 17:34, Ismael Juma wrote:
> 
> Right, some companies run them nightly. What I meant to ask is if we
> changed the configuration so that TLS 1.3 is exercised in the system tests
> by default.
> 
> Ismael
> 
> On Fri, Apr 24, 2020 at 7:32 AM Nikolay Izhikov  wrote:
> 
>> Hello, Ismael.
>> 
>> AFAIK we don’t run system tests nightly.
>> Do we have resources to run system tests periodically?
>> 
>> When I did the testing I used servers my employer gave me.
>> 
>>> On Apr 24, 2020, at 08:05, Ismael Juma wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> Seems like we have been able to run the system tests with TLS 1.3. Do we
>>> run them nightly?
>>> 
>>> Ismael
>>> 
>>> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov 
>> wrote:
>>> 
>>>> Hello, Kafka team.
>>>> 
>>>> I ran system tests that use SSL for the TLSv1.3.
>>>> You can find the results of the tests in the Jira ticket [1], [2], [3],
>>>> [4].
>>>> 
>>>> I also, need a changes [5] in `security_config.py` to execute system
>> tests
>>>> with TLSv1.3(more info in PR description).
>>>> Please, take a look.
>>>> 
>>>> Test environment:
>>>>   • openjdk11
>>>>   • trunk + changes from my PR [5].
>>>> 
>>>> Full system tests results have volume 15gb.
>>>> Should I share full logs with you?
>>>> 
>>>> What else should be done before we can enable TLSv1.3 by default?
>>>> 
>>>> [1]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
>>>> 
>>>> [2]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
>>>> 
>>>> [3]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
>>>> 
>>>> [4]
>>>> 
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
>>>> 
>>>> [5]
>>>> 
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
>>>> 
>>>>> On Jan 29, 2020, at 15:27, Nikolay Izhikov wrote:
>>>>> 
>>>>> Hello, Rajini.
>>>>> 
>>>>> Thanks for the feedback.
>>>>> 
>>>>> I’ve searched tests by the «ssl» keyword and found the following tests:
>>>>> 
>>>>> ./test/kafkatest/services/kafka_log4j_appender.py
>>>>> ./test/kafkatest/services/listener_security_config.py
>>>>> ./test/kafkatest/services/security/security_config.py
>>>>> ./test/kafkatest/tests/core/security_test.py
>>>>> 
>>>>> Is this all tests that need to be run with the TLSv1.3 to ensure we can
>>>> enable it by default?
>>>>> 
>>>>>> On Jan 28, 2020, at 14:58, Rajini Sivaram wrote:
>>>>>> 
>>>>>> Hi Nikolay,
>>>>>> 
>>>>>> Not sure of the total space required. But you can run a collection of
>>>> tests at a time instead of running them all together. That way, you
>> could
>>>> just run all the tests that enable SSL. Details of running a subset of
>>>> tests are in the README in tests.
>>>>>> 
>>>>>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov 
>>>> wrote:
>>>>>> Hello, Rajini.
>>>>>> 
>>>>>> I’m tried to run all system tests but failed for now.
>>>>>> It happens, that system tests generates a lot of logs.
>>>>>> I had a 250GB of the free space but it all was occupied by the log
>> from
>>>> half of the system tests.
>>>>>> 
>>>>>> Do you have any idea what is summary disc space I need to run all
>>>> system tests?
>>>>>> 
>>>>>>> On Jan 7, 2020, at 14:49, Rajini Sivaram wrote:

Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-04-24 Thread Nikolay Izhikov
Hello, Ismael.

AFAIK we don’t run system tests nightly.
Do we have resources to run system tests periodically?

When I did the testing, I used servers my employer provided.

> On Apr 24, 2020, at 08:05, Ismael Juma wrote:
> 
> Hi Nikolay,
> 
> Seems like we have been able to run the system tests with TLS 1.3. Do we
> run them nightly?
> 
> Ismael
> 
> On Fri, Feb 14, 2020 at 4:17 AM Nikolay Izhikov  wrote:
> 
>> Hello, Kafka team.
>> 
>> I ran system tests that use SSL for the TLSv1.3.
>> You can find the results of the tests in the Jira ticket [1], [2], [3],
>> [4].
>> 
>> I also, need a changes [5] in `security_config.py` to execute system tests
>> with TLSv1.3(more info in PR description).
>> Please, take a look.
>> 
>> Test environment:
>>• openjdk11
>>• trunk + changes from my PR [5].
>> 
>> Full system tests results have volume 15gb.
>> Should I share full logs with you?
>> 
>> What else should be done before we can enable TLSv1.3 by default?
>> 
>> [1]
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927
>> 
>> [2]
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928
>> 
>> [3]
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929
>> 
>> [4]
>> https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930
>> 
>> [5]
>> https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51
>> 
>>> On Jan 29, 2020, at 15:27, Nikolay Izhikov wrote:
>>> 
>>> Hello, Rajini.
>>> 
>>> Thanks for the feedback.
>>> 
>>> I’ve searched tests by the «ssl» keyword and found the following tests:
>>> 
>>> ./test/kafkatest/services/kafka_log4j_appender.py
>>> ./test/kafkatest/services/listener_security_config.py
>>> ./test/kafkatest/services/security/security_config.py
>>> ./test/kafkatest/tests/core/security_test.py
>>> 
>>> Is this all tests that need to be run with the TLSv1.3 to ensure we can
>> enable it by default?
>>> 
>>>> On Jan 28, 2020, at 14:58, Rajini Sivaram wrote:
>>>> 
>>>> Hi Nikolay,
>>>> 
>>>> Not sure of the total space required. But you can run a collection of
>> tests at a time instead of running them all together. That way, you could
>> just run all the tests that enable SSL. Details of running a subset of
>> tests are in the README in tests.
>>>> 
>>>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov 
>> wrote:
>>>> Hello, Rajini.
>>>> 
>>>> I’m tried to run all system tests but failed for now.
>>>> It happens, that system tests generates a lot of logs.
>>>> I had a 250GB of the free space but it all was occupied by the log from
>> half of the system tests.
>>>> 
>>>> Do you have any idea what is summary disc space I need to run all
>> system tests?
>>>> 
>>>>> On Jan 7, 2020, at 14:49, Rajini Sivaram wrote:
>>>>> 
>>>>> Hi Nikolay,
>>>>> 
>>>>> There a couple of things you could do:
>>>>> 
>>>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset,
>> but
>>>>> it will be good to run all of them. You can do this locally using
>> docker
>>>>> with JDK 11 by updating the files in tests/docker. You will need to
>> update
>>>>> tests/kafkatest/services/security/security_config.py to enable only
>>>>> TLSv1.3. Instructions for running system tests using docker are in
>>>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>>>> 2) For integration tests, we run a small number of tests using TLSv1.3
>> if
>>>>> the tests are run using JDK 11 and above. We need to do this for system
>>>>> tests as well. There is an open JIRA:
>>>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign
>> this
>>>>> to yourself if you have time to do this.
>>>>> 
>>>>> Regards,
>

Re: Need for histogram type for Kafka Connect JMX Metrics

2020-03-19 Thread Nikolay Izhikov
Hello, Kanupriya

Recently, I implemented a histogram metric [1] in Apache Ignite.
My implementation can work under streaming load.

It simply counts events that fall into predefined intervals (buckets).
I think a histogram can be useful for measuring consumer lag or similar metrics.

So, if the community is interested, we can take the same approach in Kafka.

[1] 
https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/metric/impl/HistogramMetricImpl.java
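The bucket-counting approach above can be sketched in a few lines: bucket bounds are fixed up front and each observation only increments one counter, which is what keeps it cheap under streaming load. The class and method names below are illustrative, not from any Kafka or Ignite API:

```java
import java.util.concurrent.atomic.AtomicLongArray;

// Minimal bucket-counting histogram: record() finds the first bucket whose
// upper bound is >= the value and bumps its counter; the extra last slot
// counts values above the largest bound.
public class BucketHistogram {
    private final long[] bounds;          // upper bounds of the buckets
    private final AtomicLongArray counts; // bounds.length + 1 (overflow slot)

    public BucketHistogram(long... bounds) {
        this.bounds = bounds.clone();
        this.counts = new AtomicLongArray(bounds.length + 1);
    }

    /** Record one observation, e.g. a request latency in ms. */
    public void record(long value) {
        int i = 0;
        while (i < bounds.length && value > bounds[i]) {
            i++;
        }
        counts.incrementAndGet(i);
    }

    /** Snapshot of per-bucket counts; the last slot is the overflow bucket. */
    public long[] snapshot() {
        long[] out = new long[counts.length()];
        for (int i = 0; i < out.length; i++) {
            out[i] = counts.get(i);
        }
        return out;
    }

    public static void main(String[] args) {
        BucketHistogram h = new BucketHistogram(10, 100, 1000); // ms buckets
        h.record(5);      // <= 10
        h.record(42);     // <= 100
        h.record(999);    // <= 1000
        h.record(5000);   // overflow
        System.out.println(java.util.Arrays.toString(h.snapshot())); // prints [1, 1, 1, 1]
    }
}
```

Percentiles can then be approximated from the bucket counts, which is how such a metric avoids storing every sample.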

> On Mar 18, 2020, at 13:29, Kanupriya Batra wrote:
> 
> Hi,
> 
> 
> 
> We are using kafka-connect framework in our use-case, and would also like to 
> track metrics for connectors/task. But currently Kafka Connect JMX metrics 
> are not supporting histogram types due to which we are not able to plot 
> percentiles on a lot of important metrics like :
> kafka_connect_sink_task_metrics_put_batch_avg_time_ms, or 
> kafka_connect_sink_task_metrics_sink_record_read_rate.
> I see there is a KIP Ticket 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-196%3A+Add+metrics+to+Kafka+Connect+framework
> 
> ) tracking the same, but it’s an old one from 2017 and there isn’t any 
> progress after that.
> Could you let me know if there are plans to support percentile metrics soon, 
> if not, how can I achieve it from my end.
> 
> 
> 
> 
> 
> 
> 
> 
> Thanks,
> Kanupriya



[VOTE] KIP-573: Enable TLSv1.3 by default

2020-03-02 Thread Nikolay Izhikov
Hello.

I would like to start vote for KIP-573: Enable TLSv1.3 by default

KIP - 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
Discussion thread - 
https://lists.apache.org/thread.html/r1158b6caf416e7db802780de71115b3e2d3ef2c4664b7ec8cb32ea86%40%3Cdev.kafka.apache.org%3E

Re: Fix of System tests on JDK11

2020-02-28 Thread Nikolay Izhikov
Hello, Kafka team.

PR [1] seems to be ready for merge.

It fixes system tests that start earlier versions of Kafka with JDK11 [1].

I got LGTMs from Guozhang and Ron Dagostino.
Could someone please make a final review?

[1] https://github.com/apache/kafka/pull/8138

> On Feb 21, 2020, at 16:30, Nikolay Izhikov wrote:
> 
> Hello, Kafka team.
> 
> I found that system tests that starts earlier versions of Kafka doesn’t work 
> with JDK11 [1]
> 
> There is two main reason for it:
> 
>* Kafka startup scripts contains removed JVM options like 
> `-XX:+PrintGCDateStamps or `-XX:UseParNewGC`.
>* 0.10.0.1, 0.10.1.1, 0.10.2.2, 0.11.0.3 depends on JAXB that was removed 
> in JDK11.
> 
> I fixed both of this issues.
> Tests results [3]
> 
>1. Can you, please, review my changes?
>2. Now `upgrade_test.py` has only 2 failed tests for 0.8.2.2. Stack trace 
> below.
>   I seems like unfixable for me. 
>   Can someone suggest how to fix it?
>3. What tests should be checked on JDK11 for these changes?
> 
> ```
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/apache/kafka/common/utils/Exit
>at 
> org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:540)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.kafka.common.utils.Exit
>at 
> java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
>at 
> java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
>at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
>... 1 more
> 
> ```
> 
> I can provide full tests logs if you need it. Full size 7.5gb.
> 
> [1] https://issues.apache.org/jira/browse/KAFKA-9573
> [2] https://github.com/apache/kafka/pull/8138
> [3] 
> https://issues.apache.org/jira/browse/KAFKA-9573?focusedCommentId=17041847=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17041847



Re: [DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-02-24 Thread Nikolay Izhikov
Hello.

Any feedback on this?

This change seems very simple; I can start a vote right now if there is nothing 
to discuss here.

> On Feb 21, 2020, at 15:18, Nikolay Izhikov wrote:
> 
> Hello, 
> 
> I'd like to start a discussion of KIP [1]
> This is follow-up for the KIP-553 [2]
> 
> Its goal is to enable TLSv1.3 by default.
> 
> Your comments and suggestions are welcome.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
> [2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956



Fix of System tests on JDK11

2020-02-21 Thread Nikolay Izhikov
Hello, Kafka team.

I found that system tests that start earlier versions of Kafka don’t work 
with JDK11 [1].

There are two main reasons for it:

    * Kafka startup scripts contain removed JVM options like 
`-XX:+PrintGCDateStamps` or `-XX:+UseParNewGC`.
    * 0.10.0.1, 0.10.1.1, 0.10.2.2, and 0.11.0.3 depend on JAXB, which was 
removed in JDK11.

I fixed both of these issues.
Test results: [3]
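The first of the two issues above boils down to guarding GC-logging flags by JVM major version. A hedged sketch, mirroring the kind of guard the startup-script fix needs (the flag strings are real HotSpot options; the `GcFlags` helper itself is illustrative, not the actual script):

```java
// Pick GC-logging flags based on the JVM major version: JDK 9+ uses
// unified logging (-Xlog), while pre-9 JVMs use -Xloggc plus the
// -XX:+PrintGC* flags that were removed in later releases.
public class GcFlags {
    static int majorVersion(String javaVersion) {
        // "1.8.0_292" -> 8, "11.0.2" -> 11
        String[] parts = javaVersion.split("\\.");
        int first = Integer.parseInt(parts[0]);
        return first == 1 ? Integer.parseInt(parts[1]) : first;
    }

    static String gcLogOpts(String javaVersion) {
        return majorVersion(javaVersion) >= 9
            ? "-Xlog:gc*:file=kafkaServer-gc.log"                  // JDK 9+
            : "-Xloggc:kafkaServer-gc.log -XX:+PrintGCDateStamps"; // pre-JDK 9
    }

    public static void main(String[] args) {
        System.out.println(gcLogOpts("1.8.0_292"));
        System.out.println(gcLogOpts("11.0.2"));
    }
}
```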

1. Can you please review my changes?
2. Now `upgrade_test.py` has only 2 failed tests, for 0.8.2.2. Stack trace 
below.
   It seems unfixable to me.
   Can someone suggest how to fix it?
3. What tests should be checked on JDK11 for these changes? 

```
Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/kafka/common/utils/Exit
at 
org.apache.kafka.tools.VerifiableProducer.main(VerifiableProducer.java:540)
Caused by: java.lang.ClassNotFoundException: org.apache.kafka.common.utils.Exit
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 1 more

```

I can provide the full test logs if you need them. Full size is 7.5 GB.

[1] https://issues.apache.org/jira/browse/KAFKA-9573
[2] https://github.com/apache/kafka/pull/8138
[3] 
https://issues.apache.org/jira/browse/KAFKA-9573?focusedCommentId=17041847=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17041847

[DISCUSS] KIP-573: Enable TLSv1.3 by default

2020-02-21 Thread Nikolay Izhikov
Hello, 

I'd like to start a discussion of KIP [1]
This is follow-up for the KIP-553 [2]

Its goal is to enable TLSv1.3 by default.

Your comments and suggestions are welcome.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-573%3A+Enable+TLSv1.3+by+default
[2] https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956

[jira] [Created] (KAFKA-9573) TestUpgrade system test failed on Java11.

2020-02-19 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-9573:
--

 Summary: TestUpgrade system test failed on Java11.
 Key: KAFKA-9573
 URL: https://issues.apache.org/jira/browse/KAFKA-9573
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov


Test was run on JDK11.
Test result:

{noformat}

test_id:
kafkatest.tests.core.upgrade_test.TestUpgrade.test_upgrade.from_kafka_version=0.9.0.1.to_message_format_version=None.security_protocol=SASL_SSL.compression_types=.none
status: FAIL
run time:   1 minute 28.387 seconds


Kafka server didn't finish startup in 60 seconds
Traceback (most recent call last):
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 
132, in run
data = self.run_test()
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", line 
189, in run_test
return self.test_context.function(self.test)
  File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 
428, in wrapper
return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
  File "/opt/kafka-dev/tests/kafkatest/tests/core/upgrade_test.py", line 133, 
in test_upgrade
self.kafka.start()
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 242, in 
start
Service.start(self)
  File "/usr/local/lib/python2.7/dist-packages/ducktape/services/service.py", 
line 234, in start
self.start_node(node)
  File "/opt/kafka-dev/tests/kafkatest/services/kafka/kafka.py", line 357, in 
start_node
err_msg="Kafka server didn't finish startup in %d seconds" % timeout_sec)
  File 
"/usr/local/lib/python2.7/dist-packages/ducktape/cluster/remoteaccount.py", 
line 705, in wait_until
allow_fail=True) == 0, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/ducktape/utils/util.py", line 
41, in wait_until
raise TimeoutError(err_msg() if callable(err_msg) else err_msg)
TimeoutError: Kafka server didn't finish startup in 60 seconds
{noformat}

Detailed output:

{noformat}
[0.001s][warning][gc] -Xloggc is deprecated. Will use 
-Xlog:gc:/opt/kafka-0.9.0.1/bin/../logs/kafkaServer-gc.log instead.
Unrecognized VM option 'PrintGCDateStamps'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
~ 
{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-02-14 Thread Nikolay Izhikov
Hello, Kafka team.

I ran the system tests that use SSL with TLSv1.3.
You can find the results of the tests in the Jira ticket [1], [2], [3], [4].

I also need a change [5] in `security_config.py` to execute system tests with 
TLSv1.3 (more info in the PR description).
Please take a look.

Test environment:
• openjdk11
• trunk + changes from my PR [5].

The full system test results are 15 GB in size.
Should I share the full logs with you?

What else should be done before we can enable TLSv1.3 by default?

[1] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036927=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036927

[2] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036928=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036928

[3] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036929=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036929

[4] 
https://issues.apache.org/jira/browse/KAFKA-9319?focusedCommentId=17036930=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17036930

[5] 
https://github.com/apache/kafka/pull/8106/files#diff-6dd015b94706f6920d9de524c355ddd8R51

> On 29 Jan 2020, at 15:27, Nikolay Izhikov wrote:
> 
> Hello, Rajini.
> 
> Thanks for the feedback.
> 
> I searched the tests for the «ssl» keyword and found the following:
> 
> ./test/kafkatest/services/kafka_log4j_appender.py
> ./test/kafkatest/services/listener_security_config.py
> ./test/kafkatest/services/security/security_config.py
> ./test/kafkatest/tests/core/security_test.py
> 
> Are these all the tests that need to be run with TLSv1.3 to ensure we can
> enable it by default?
> 
>> On 28 Jan 2020, at 14:58, Rajini Sivaram wrote:
>> 
>> Hi Nikolay,
>> 
>> Not sure of the total space required. But you can run a collection of tests 
>> at a time instead of running them all together. That way, you could just run 
>> all the tests that enable SSL. Details of running a subset of tests are in 
>> the README in tests.
>> 
>> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov  wrote:
>> Hello, Rajini.
>> 
>> I tried to run all the system tests but have failed so far.
>> It turns out that the system tests generate a lot of logs.
>> I had 250 GB of free space, but it was all consumed by the logs from half
>> of the system tests.
>> 
>> Do you have any idea how much total disk space I need to run all the
>> system tests?
>> 
>>> On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
>>> 
>>> Hi Nikolay,
>>> 
>>> There are a couple of things you could do:
>>> 
>>> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
>>> it will be good to run all of them. You can do this locally using docker
>>> with JDK 11 by updating the files in tests/docker. You will need to update
>>> tests/kafkatest/services/security/security_config.py to enable only
>>> TLSv1.3. Instructions for running system tests using docker are in
>>> https://github.com/apache/kafka/blob/trunk/tests/README.md.
>>> 2) For integration tests, we run a small number of tests using TLSv1.3 if
>>> the tests are run using JDK 11 and above. We need to do this for system
>>> tests as well. There is an open JIRA:
>>> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign this
>>> to yourself if you have time to do this.
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> 
>>> On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков  wrote:
>>> 
>>>> Hello, Rajini.
>>>> 
>>>> Can you, please, clarify, what should be done?
>>>> I can try to do tests by myself.
>>>> 
>>>>> On 6 Jan 2020, at 21:29, Rajini Sivaram wrote:
>>>>> 
>>>>> Hi Brajesh.
>>>>> 
>>>>> No one is working on this yet, but will follow up with the Confluent
>>>> tools
>>>>> team to see when this can be done.
>>>>> 
>>>>> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar 
>>>> wrote:
>>>>> 
>>>>>> Hello Rajini,
>>>>>> 
>>>>>> What is the plan to run system tests using JDK 11? Is someone working on
>>>>>> this?
>>>>>> 
>>>>>> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram 
>>>>>> wrote:
>>>>>> 
>>>>>>> Hi Nikolay,
>>>>>

Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-01-29 Thread Nikolay Izhikov
Hello, Rajini.

Thanks for the feedback.

I searched the tests for the «ssl» keyword and found the following:

./test/kafkatest/services/kafka_log4j_appender.py
./test/kafkatest/services/listener_security_config.py
./test/kafkatest/services/security/security_config.py
./test/kafkatest/tests/core/security_test.py

Are these all the tests that need to be run with TLSv1.3 to ensure we can enable
it by default?

> On 28 Jan 2020, at 14:58, Rajini Sivaram wrote:
> 
> Hi Nikolay,
> 
> Not sure of the total space required. But you can run a collection of tests 
> at a time instead of running them all together. That way, you could just run 
> all the tests that enable SSL. Details of running a subset of tests are in 
> the README in tests.
> 
> On Mon, Jan 27, 2020 at 6:29 PM Nikolay Izhikov  wrote:
> Hello, Rajini.
> 
> I tried to run all the system tests but have failed so far.
> It turns out that the system tests generate a lot of logs.
> I had 250 GB of free space, but it was all consumed by the logs from half
> of the system tests.
> 
> Do you have any idea how much total disk space I need to run all the
> system tests?
> 
> > On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
> > 
> > Hi Nikolay,
> > 
> > There are a couple of things you could do:
> > 
> > 1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
> > it will be good to run all of them. You can do this locally using docker
> > with JDK 11 by updating the files in tests/docker. You will need to update
> > tests/kafkatest/services/security/security_config.py to enable only
> > TLSv1.3. Instructions for running system tests using docker are in
> > https://github.com/apache/kafka/blob/trunk/tests/README.md.
> > 2) For integration tests, we run a small number of tests using TLSv1.3 if
> > the tests are run using JDK 11 and above. We need to do this for system
> > tests as well. There is an open JIRA:
> > https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign this
> > to yourself if you have time to do this.
> > 
> > Regards,
> > 
> > Rajini
> > 
> > 
> > On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков  wrote:
> > 
> >> Hello, Rajini.
> >> 
> >> Can you, please, clarify, what should be done?
> >> I can try to do tests by myself.
> >> 
> >>> On 6 Jan 2020, at 21:29, Rajini Sivaram wrote:
> >>> 
> >>> Hi Brajesh.
> >>> 
> >>> No one is working on this yet, but will follow up with the Confluent
> >> tools
> >>> team to see when this can be done.
> >>> 
> >>> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar 
> >> wrote:
> >>> 
> >>>> Hello Rajini,
> >>>> 
> >>>> What is the plan to run system tests using JDK 11? Is someone working on
> >>>> this?
> >>>> 
> >>>> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram 
> >>>> wrote:
> >>>> 
> >>>>> Hi Nikolay,
> >>>>> 
> >>>>> We can leave the KIP open and restart the discussion once system tests
> >>>> are
> >>>>> running.
> >>>>> 
> >>>>> Thanks,
> >>>>> 
> >>>>> Rajini
> >>>>> 
> >>>>> On Mon, Jan 6, 2020 at 2:46 PM Николай Ижиков 
> >>>> wrote:
> >>>>> 
> >>>>>> Hello, Rajini.
> >>>>>> 
> >>>>>> Thanks, for the feedback.
> >>>>>> 
> >>>>>> Should I mark this KIP as declined?
> >>>>>> Or just wait for the system tests results?
> >>>>>> 
> >>>>>>> On 6 Jan 2020, at 17:26, Rajini Sivaram wrote:
> >>>>>>> 
> >>>>>>> Hi Nikolay,
> >>>>>>> 
> >>>>>>> Thanks for the KIP. We currently run system tests using JDK 8 and
> >>>> hence
> >>>>>> we
> >>>>>>> don't yet have full system test results with TLS 1.3 which requires
> >>>> JDK
> >>>>>> 11.
> >>>>>>> We should wait until that is done before enabling TLS1.3 by default.
> >>>>>>> 
> >>>>>>> Regards,
> >>>>>>> 
> >>>>>>> Rajini
> >>>>>>> 
> >>>>>>> 
> >>>>>>> On Mon, Dec 30, 2019 at 5:36 AM Николай Ижиков 
> >>>>>> wrote:
> >>>>>>> 
> >>>>>>>> Hello, Team.
> >>>>>>>> 
> >>>>>>>> Any feedback on this KIP?
> >>>>>>>> Do we need this in Kafka?
> >>>>>>>> 
> >>>>>>>>> On 24 Dec 2019, at 18:28, Nikolay Izhikov wrote:
> >>>>>>>>> 
> >>>>>>>>> Hello,
> >>>>>>>>> 
> >>>>>>>>> I'd like to start a discussion of KIP.
> >>>>>>>>> Its goal is to enable TLSv1.3 and disable obsolete versions by
> >>>>> default.
> >>>>>>>>> 
> >>>>>>>>> 
> >>>>>>>> 
> >>>>>> 
> >>>>> 
> >>>> 
> >> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
> >>>>>>>>> 
> >>>>>>>>> Your comments and suggestions are welcome.
> >>>>>>>>> 
> >>>>>>>> 
> >>>>>>>> 
> >>>>>> 
> >>>>>> 
> >>>>> 
> >>>> 
> >>>> 
> >>>> --
> >>>> Regards,
> >>>> Brajesh Kumar
> >>>> 
> >> 
> >> 
> 



Re: [VOTE] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-28 Thread Nikolay Izhikov
The KIP has been adopted and merged in
https://github.com/apache/kafka/commit/172409c44b8551e2315bd93044a8a95ccda4699f

> On 27 Jan 2020, at 13:10, Nikolay Izhikov wrote:
> 
> Thanks everyone!
> 
> After 3+ business days since this thread started, I'm concluding the vote
> on KIP-553.
> 
> The KIP has passed with:
> 
> 4 binding votes from Mickael Maison, Manikumar, Rajini Sivaram, M. Manna.
> 2 non-binding vote from Ted Yu, Ron Dagostino.
> 
> Thank you all for voting!
> 
>> On 22 Jan 2020, at 14:43, M. Manna wrote:
>> 
>> +1 (binding). A simple, and yet powerful enforcement of TLS version.
>> 
>> Thanks for this KIP :)
>> 
>> On Tue, 21 Jan 2020 at 20:39, Mickael Maison 
>> wrote:
>> 
>>> +1 (binding)
>>> Thanks
>>> 
>>> On Tue, Jan 21, 2020 at 7:58 PM Ron Dagostino  wrote:
>>>> 
>>>> +1 (non-binding)
>>>> 
>>>> Ron
>>>> 
>>>> On Tue, Jan 21, 2020 at 11:29 AM Manikumar 
>>> wrote:
>>>>> 
>>>>> +1 (binding).
>>>>> 
>>>>> Thanks for the KIP.
>>>>> 
>>>>> 
>>>>> On Tue, Jan 21, 2020 at 9:56 PM Ted Yu  wrote:
>>>>> 
>>>>>> +1
>>>>>> 
>>>>>> On Tue, Jan 21, 2020 at 8:24 AM Rajini Sivaram <
>>> rajinisiva...@gmail.com>
>>>>>> wrote:
>>>>>> 
>>>>>>> +1 (binding)
>>>>>>> 
>>>>>>> Thanks for the KIP!
>>>>>>> 
>>>>>>> Regards,
>>>>>>> 
>>>>>>> Rajini
>>>>>>> 
>>>>>>> 
>>>>>>> On Tue, Jan 21, 2020 at 3:43 PM Николай Ижиков <
>>> nizhi...@apache.org>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> Hello.
>>>>>>>> 
>>>>>>>> I would like to start vote for KIP-553: Disable all SSL protocols
>>>>>> except
>>>>>>>> TLSV1.2 by default.
>>>>>>>> 
>>>>>>>> KIP -
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
>>>>>>>> Discussion thread -
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>> https://lists.apache.org/thread.html/9c6201fe403a24f84fc3aa27f47dd06b718c1d80de0ee3412b9b877c%40%3Cdev.kafka.apache.org%3E
>>>>>>> 
>>>>>> 
>>> 
> 



Re: [DISCUSS] KIP-561: Regex Expressions Support for ConsumerGroupCommand

2020-01-28 Thread Nikolay Izhikov
Hello, Alexander.

As far as I can see from the previous discussion, you got positive feedback from
the community.
If you have resolved all the comments and suggestions, I think you should consider
starting a vote for this KIP.

> On 28 Jan 2020, at 10:56, Alexander Dunayevsky wrote:
> 
> Any additional feedback on this?
> 
> Best Regards,
> Alex Dunayevsky
> 
> 
> On Thu, 23 Jan 2020, 11:39 Alexander Dunayevsky, 
> wrote:
> 
>> Hello guys,
>> 
>> Let's discuss KIP-561 Regex Support for ConsumerGroupCommand:
>> 
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-561%3A+Regex+Support+for+ConsumerGroupCommand
>> 
>> Functionality already implemented and waiting to be reviewed.
>> 
>> Best Regards,
>> Alex Dunayevsky
>> 
>> 
>> On Thu, 16 Jan 2020, 14:25 Alex D,  wrote:
>> 
>>> Hello, guys,
>>> 
>>> Please review Regex Expressions Support for ConsumerGroupCommand
>>> improvement proposal
>>> 
>>>   - *Previous Discussion 1*: Re: Multiple Consumer Group Management
>>>   
>>>   - *Previous Discussion 2*: Re: ConsumerGroupCommand tool improvement?
>>>   
>>> 
>>> *JIRA*: KAFKA-7817 Multiple Consumer Group Management with Regex
>>> 
>>> 
>>> *PR*: #6700 
>>> 
>> 



Re: [DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2020-01-27 Thread Nikolay Izhikov
Hello, Rajini.

I tried to run all the system tests but have failed so far.
It turns out that the system tests generate a lot of logs.
I had 250 GB of free space, but it was all consumed by the logs from half of
the system tests.

Do you have any idea how much total disk space I need to run all the system tests?

> On 7 Jan 2020, at 14:49, Rajini Sivaram wrote:
> 
> Hi Nikolay,
> 
> There are a couple of things you could do:
> 
> 1) Run all system tests that use SSL with TLSv1.3. I had run a subset, but
> it will be good to run all of them. You can do this locally using docker
> with JDK 11 by updating the files in tests/docker. You will need to update
> tests/kafkatest/services/security/security_config.py to enable only
> TLSv1.3. Instructions for running system tests using docker are in
> https://github.com/apache/kafka/blob/trunk/tests/README.md.
> 2) For integration tests, we run a small number of tests using TLSv1.3 if
> the tests are run using JDK 11 and above. We need to do this for system
> tests as well. There is an open JIRA:
> https://issues.apache.org/jira/browse/KAFKA-9319. Feel free to assign this
> to yourself if you have time to do this.
> 
> Regards,
> 
> Rajini
> 
> 
> On Tue, Jan 7, 2020 at 5:15 AM Николай Ижиков  wrote:
> 
>> Hello, Rajini.
>> 
>> Can you, please, clarify, what should be done?
>> I can try to do tests by myself.
>> 
>>> On 6 Jan 2020, at 21:29, Rajini Sivaram wrote:
>>> 
>>> Hi Brajesh.
>>> 
>>> No one is working on this yet, but will follow up with the Confluent
>> tools
>>> team to see when this can be done.
>>> 
>>> On Mon, Jan 6, 2020 at 3:29 PM Brajesh Kumar 
>> wrote:
>>> 
>>>> Hello Rajini,
>>>> 
>>>> What is the plan to run system tests using JDK 11? Is someone working on
>>>> this?
>>>> 
>>>> On Mon, Jan 6, 2020 at 3:00 PM Rajini Sivaram 
>>>> wrote:
>>>> 
>>>>> Hi Nikolay,
>>>>> 
>>>>> We can leave the KIP open and restart the discussion once system tests
>>>> are
>>>>> running.
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Rajini
>>>>> 
>>>>> On Mon, Jan 6, 2020 at 2:46 PM Николай Ижиков 
>>>> wrote:
>>>>> 
>>>>>> Hello, Rajini.
>>>>>> 
>>>>>> Thanks, for the feedback.
>>>>>> 
>>>>>> Should I mark this KIP as declined?
>>>>>> Or just wait for the system tests results?
>>>>>> 
>>>>>>> On 6 Jan 2020, at 17:26, Rajini Sivaram wrote:
>>>>>>> 
>>>>>>> Hi Nikolay,
>>>>>>> 
>>>>>>> Thanks for the KIP. We currently run system tests using JDK 8 and
>>>> hence
>>>>>> we
>>>>>>> don't yet have full system test results with TLS 1.3 which requires
>>>> JDK
>>>>>> 11.
>>>>>>> We should wait until that is done before enabling TLS1.3 by default.
>>>>>>> 
>>>>>>> Regards,
>>>>>>> 
>>>>>>> Rajini
>>>>>>> 
>>>>>>> 
>>>>>>> On Mon, Dec 30, 2019 at 5:36 AM Николай Ижиков 
>>>>>> wrote:
>>>>>>> 
>>>>>>>> Hello, Team.
>>>>>>>> 
>>>>>>>> Any feedback on this KIP?
>>>>>>>> Do we need this in Kafka?
>>>>>>>> 
>>>>>>>>> On 24 Dec 2019, at 18:28, Nikolay Izhikov wrote:
>>>>>>>>> 
>>>>>>>>> Hello,
>>>>>>>>> 
>>>>>>>>> I'd like to start a discussion of KIP.
>>>>>>>>> Its goal is to enable TLSv1.3 and disable obsolete versions by
>>>>> default.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
>>>>>>>>> 
>>>>>>>>> Your comments and suggestions are welcome.
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> --
>>>> Regards,
>>>> Brajesh Kumar
>>>> 
>> 
>> 



Re: [VOTE] KIP-553: Disable all SSL protocols except TLSV1.2 by default.

2020-01-27 Thread Nikolay Izhikov
Thanks everyone!

After 3+ business days since this thread started, I'm concluding the vote
on KIP-553.

The KIP has passed with:

4 binding votes from Mickael Maison, Manikumar, Rajini Sivaram, M. Manna.
2 non-binding vote from Ted Yu, Ron Dagostino.

Thank you all for voting!

> On 22 Jan 2020, at 14:43, M. Manna wrote:
> 
> +1 (binding). A simple, and yet powerful enforcement of TLS version.
> 
> Thanks for this KIP :)
> 
> On Tue, 21 Jan 2020 at 20:39, Mickael Maison 
> wrote:
> 
>> +1 (binding)
>> Thanks
>> 
>> On Tue, Jan 21, 2020 at 7:58 PM Ron Dagostino  wrote:
>>> 
>>> +1 (non-binding)
>>> 
>>> Ron
>>> 
>>> On Tue, Jan 21, 2020 at 11:29 AM Manikumar 
>> wrote:
 
 +1 (binding).
 
 Thanks for the KIP.
 
 
 On Tue, Jan 21, 2020 at 9:56 PM Ted Yu  wrote:
 
> +1
> 
> On Tue, Jan 21, 2020 at 8:24 AM Rajini Sivaram <
>> rajinisiva...@gmail.com>
> wrote:
> 
>> +1 (binding)
>> 
>> Thanks for the KIP!
>> 
>> Regards,
>> 
>> Rajini
>> 
>> 
>> On Tue, Jan 21, 2020 at 3:43 PM Николай Ижиков <
>> nizhi...@apache.org>
>> wrote:
>> 
>>> Hello.
>>> 
>>> I would like to start vote for KIP-553: Disable all SSL protocols
> except
>>> TLSV1.2 by default.
>>> 
>>> KIP -
>>> 
>> 
> 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956
>>> Discussion thread -
>>> 
>> 
> 
>> https://lists.apache.org/thread.html/9c6201fe403a24f84fc3aa27f47dd06b718c1d80de0ee3412b9b877c%40%3Cdev.kafka.apache.org%3E
>> 
> 
>> 



[jira] [Created] (KAFKA-9460) Enable TLSv1.2 by default and disable all others protocol versions

2020-01-21 Thread Nikolay Izhikov (Jira)
Nikolay Izhikov created KAFKA-9460:
--

 Summary: Enable TLSv1.2 by default and disable all others protocol 
versions
 Key: KAFKA-9460
 URL: https://issues.apache.org/jira/browse/KAFKA-9460
 Project: Kafka
  Issue Type: Improvement
Reporter: Nikolay Izhikov
Assignee: Nikolay Izhikov


In KAFKA-7251, support for TLS 1.3 was introduced.

For now, only TLS 1.2 and TLS 1.3 are recommended for use; other TLS versions
are considered obsolete:

https://www.rfc-editor.org/info/rfc8446
https://en.wikipedia.org/wiki/Transport_Layer_Security#History_and_development

However, testing of TLS 1.3 is incomplete for now.

We should enable only current TLS protocol versions by default so that users
get secure implementations out of the box.

Users can still enable obsolete TLS versions through configuration if they
want to.
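As an illustrative sketch of what the resulting broker default could look like: the property names below are real Kafka SSL settings, but the values shown are an assumption based on this ticket's TLS 1.2-only proposal, not the shipped defaults.

```properties
# Illustrative defaults per this ticket's proposal (assumption, not shipped values).
ssl.enabled.protocols=TLSv1.2
ssl.protocol=TLSv1.2
```

A user who needs an obsolete protocol version would override `ssl.enabled.protocols` explicitly.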



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] KIP-553: Enable TLSv1.3 by default and disable all protocols except [TLSV1.2, TLSV1.3]

2019-12-24 Thread Nikolay Izhikov
Hello,

I'd like to start a discussion of KIP.
Its goal is to enable TLSv1.3 and disable obsolete versions by default.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=142641956

Your comments and suggestions are welcome.


Re: KAFKA-8584: Support of ByteBuffer for bytes field implemented[Convert Kafka RPCs to use automatically generated code]

2019-10-23 Thread Nikolay Izhikov
Hello.

This improvement has been merged.
Thanks, Colin, for the review.

On Fri, 18/10/2019 at 15:02 -0700, Colin McCabe wrote:
> Hi Nikolay,
> 
> Sorry that I haven't had more bandwidth to review this recently.  I will take 
> a look today.
> 
> In the future, can you please rebase your changes on top of trunk, rather 
> than merging trunk into your branch?  It is difficult to follow which changes 
> are yours and which come from the merge, when you do it the other way.
> 
> best,
> Colin
> 
> 
> On Thu, Oct 17, 2019, at 02:59, Nikolay Izhikov wrote:
> > Hello.
> > 
> > Is there something wrong with the PR?
> > Do we still need this ticket to be done? [2]
> > If not, let's close both the PR [1] and the ticket.
> > 
> > Have the design or implementation details changed?
> > If so, could you please send a link where I can find the details?
> > 
> > [1] https://github.com/apache/kafka/pull/7342
> > [2] https://issues.apache.org/jira/browse/KAFKA-8885
> > 
On Mon, 7 Oct 2019 at 10:08, Nikolay Izhikov wrote:
> > 
> > > Hello.
> > > 
> > > Please, review my changes [1]
> > > I fixed all conflicts after KAFKA-8885 [2] merge [3].
> > > 
> > > [1] https://github.com/apache/kafka/pull/7342
> > > [2] https://issues.apache.org/jira/browse/KAFKA-8885
> > > [3]
> > > https://github.com/apache/kafka/commit/0de61a4683b92bdee803c51211c3277578ab3edf
> > > 
> > > On Fri, 20/09/2019 at 09:18 -0700, Colin McCabe wrote:
> > > > Hi Nikolay,
> > > > 
> > > > Thanks for working on this.  I think everyone agrees that we should have
> > > 
> > > byte buffer support in the generator.  We just haven't had a lot of time
> > > for reviewing it lately.   I don't really mind which PR we use :)  I will
> > > take a look at your PR today and see if we can get it into shape for what
> > > we need.
> > > > 
> > > > best,
> > > > Colin
> > > > 
> > > > On Fri, Sep 20, 2019, at 09:18, Nikolay Izhikov wrote:
> > > > > Hello, all.
> > > > > 
> > > > > Any feedback on this?
> > > > > Do we need support of ByteBuffer in RPC generated code?
> > > > > 
> > > > > Which PR should be reviewed and merged?
> > > > > 
> > > > > On Thu, 19/09/2019 at 10:11 +0300, Nikolay Izhikov wrote:
> > > > > > Hello, guys.
> > > > > > 
> > > > > > Looks like we have duplicate tickets and PR's here.
> > > > > > 
> > > > > > One from me:
> > > > > > 
> > > > > > KAFKA-8584: Support of ByteBuffer for bytes field implemented.
> > > > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8584
> > > > > > pr - https://github.com/apache/kafka/pull/7342
> > > > > > 
> > > > > > and one from Colin McCabe:
> > > > > > 
> > > > > > KAFKA-8628: Auto-generated Kafka RPC code should be able to use
> > > 
> > > zero-copy ByteBuffers
> > > > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8628
> > > > > > pr - https://github.com/apache/kafka/pull/7032
> > > > > > 
> > > > > > I want to continue work on my PR and got it merged.
> > > > > > But, it up to community to decide which changes are best for the
> > > 
> > > product.
> > > > > > 
> > > > > > Please, let me know, what do you think.
> > > > > > 
> > > > > > 
> > > > > > On Tue, 17/09/2019 at 01:52 +0300, Nikolay Izhikov wrote:
> > > > > > > Hello, Kafka team.
> > > > > > > 
> > > > > > > I implemented KAFKA-8584 [1].
> > > > > > > PR - [2]
> > > > > > > Please, do the review.
> > > > > > > 
> > > > > > > [1] https://issues.apache.org/jira/browse/KAFKA-8584
> > > > > > > [2] https://github.com/apache/kafka/pull/7342
> > > > > 
> > > > > Attachments:
> > > > > * signature.asc


signature.asc
Description: This is a digitally signed message part


Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-19 Thread Nikolay Izhikov
Hello.

This KIP was adopted and merged in commit
https://github.com/apache/kafka/commit/4e094217f7360becd3640a38587e25f9a3bfd4b3

Thanks Sophie and Matthias for the review and merge.
Thanks Guozhang, Bill, Matthias, Bruno for the votes and feedback.

On Wed, 09/10/2019 at 15:55 +0300, Nikolay Izhikov wrote:
> Thanks everyone!
> 
> After 3+ business days since this thread started, I'm concluding the vote
> on KIP-527.
> 
> The KIP has passed with:
> 
> 3 binding votes from Guzhang, Bill, Matthias.
> 1 non-binding vote from Bruno.
> 
> Thank you all for voting!
> 
> 
> On Tue, 08/10/2019 at 15:17 -0700, Guozhang Wang wrote:
> > Thanks, I'm +1 (binding).
> > 
> > On Tue, Oct 8, 2019 at 2:33 AM Nikolay Izhikov  wrote:
> > 
> > > Hello, Guozhang.
> > > 
> > > Following added to the KIP:
> > > 
> > > > If non-null parameters are passed, then a
> > > java.lang.IllegalArgumentException will be thrown.
> > > 
> > > On Mon, 07/10/2019 at 15:51 -0700, Guozhang Wang wrote:
> > > > Hello Nikolay,
> > > > 
> > > > Could you clarify in the wiki doc what would happen if the passed in
> > > > `byte[] bytes` or `Void data` parameters are not null? From the example
> > > 
> > > PR
> > > > we would throw exception here. I think it worth documenting it as part 
> > > > of
> > > > the contract in the wiki page.
> > > > 
> > > > Otherwise, I'm +1.
> > > > 
> > > > 
> > > > Guozhang
> > > > 
> > > > On Mon, Oct 7, 2019 at 3:19 PM Bill Bejeck  wrote:
> > > > 
> > > > > Thanks for the KIP.
> > > > > 
> > > > > +1(binding)
> > > > > 
> > > > > -Bill
> > > > > 
> > > > > On Mon, Oct 7, 2019 at 5:57 PM Matthias J. Sax 
> > > > > wrote:
> > > > > 
> > > > > > +1 (binding)
> > > > > > 
> > > > > > 
> > > > > > -Matthias
> > > > > > 
> > > > > > On 10/6/19 8:02 PM, Nikolay Izhikov wrote:
> > > > > > > Hello,
> > > > > > > 
> > > > > > > Any additional feedback on this?
> > > > > > > Do we need this in Kafka?
> > > > > > > 
> > > > > > > On Wed, 02/10/2019 at 08:30 +0200, Bruno Cadonna wrote:
> > > > > > > > Hi Nikolay,
> > > > > > > > 
> > > > > > > > Thank you for the KIP!
> > > > > > > > 
> > > > > > > > +1 (non-binding)
> > > > > > > > 
> > > > > > > > Best,
> > > > > > > > Bruno
> > > > > > > > 
> > > > > > > > On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov <
> > > 
> > > nizhi...@apache.org>
> > > > > > 
> > > > > > wrote:
> > > > > > > > > 
> > > > > > > > > Hello.
> > > > > > > > > 
> > > > > > > > > I would like to start vote for KIP-527: Add VoidSerde to 
> > > > > > > > > Serdes
> > > > > > > > > 
> > > > > > > > > KIP -
> > > > > 
> > > > > 
> > > 
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > > > > > Discussion thread -
> > > > > 
> > > > > 
> > > 
> > > https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > > > > > > > > 
> > > > > > 
> > > > > > 
> > > > 
> > > > 
> > 
> > 


signature.asc
Description: This is a digitally signed message part


Re: KAFKA-8584: Support of ByteBuffer for bytes field implemented[Convert Kafka RPCs to use automatically generated code]

2019-10-17 Thread Nikolay Izhikov
Hello.

Is there something wrong with the PR?
Do we still need this ticket to be done? [2]
If not, let's close both the PR [1] and the ticket.

Have the design or implementation details changed?
If so, could you please send a link where I can find the details?

[1] https://github.com/apache/kafka/pull/7342
[2] https://issues.apache.org/jira/browse/KAFKA-8885

On Mon, 7 Oct 2019 at 10:08, Nikolay Izhikov wrote:

> Hello.
>
> Please, review my changes [1]
> I fixed all conflicts after KAFKA-8885 [2] merge [3].
>
> [1] https://github.com/apache/kafka/pull/7342
> [2] https://issues.apache.org/jira/browse/KAFKA-8885
> [3]
> https://github.com/apache/kafka/commit/0de61a4683b92bdee803c51211c3277578ab3edf
>
> On Fri, 20/09/2019 at 09:18 -0700, Colin McCabe wrote:
> > Hi Nikolay,
> >
> > Thanks for working on this.  I think everyone agrees that we should have
> byte buffer support in the generator.  We just haven't had a lot of time
> for reviewing it lately.   I don't really mind which PR we use :)  I will
> take a look at your PR today and see if we can get it into shape for what
> we need.
> >
> > best,
> > Colin
> >
> > On Fri, Sep 20, 2019, at 09:18, Nikolay Izhikov wrote:
> > > Hello, all.
> > >
> > > Any feedback on this?
> > > Do we need support of ByteBuffer in RPC generated code?
> > >
> > > Which PR should be reviewed and merged?
> > >
> > > On Thu, 19/09/2019 at 10:11 +0300, Nikolay Izhikov wrote:
> > > > Hello, guys.
> > > >
> > > > Looks like we have duplicate tickets and PR's here.
> > > >
> > > > One from me:
> > > >
> > > > KAFKA-8584: Support of ByteBuffer for bytes field implemented.
> > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8584
> > > > pr - https://github.com/apache/kafka/pull/7342
> > > >
> > > > and one from Colin McCabe:
> > > >
> > > > KAFKA-8628: Auto-generated Kafka RPC code should be able to use
> zero-copy ByteBuffers
> > > > ticket - https://issues.apache.org/jira/browse/KAFKA-8628
> > > > pr - https://github.com/apache/kafka/pull/7032
> > > >
> > > > I want to continue work on my PR and got it merged.
> > > > But, it up to community to decide which changes are best for the
> product.
> > > >
> > > > Please, let me know, what do you think.
> > > >
> > > >
> > > > On Tue, 17/09/2019 at 01:52 +0300, Nikolay Izhikov wrote:
> > > > > Hello, Kafka team.
> > > > >
> > > > > I implemented KAFKA-8584 [1].
> > > > > PR - [2]
> > > > > Please, do the review.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/KAFKA-8584
> > > > > [2] https://github.com/apache/kafka/pull/7342
> > >
> > > Attachments:
> > > * signature.asc
>


Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-10-16 Thread Nikolay Izhikov
Hello.

The PR has been approved by Sophie Blee-Goldman.
Tests are green.

Other committers, please join the review.

On Thu, 10 Oct 2019 at 16:52, Nikolay Izhikov wrote:

> Hello.
>
> This KIP was accepted.
>
> I created PR [1] for it.
> Please, review.
>
> [1] https://github.com/apache/kafka/pull/7485
>
>
> On Mon, 07/10/2019 at 14:56 -0700, Matthias J. Sax wrote:
> > Thanks,
> >
> > Overall LGTM. Can you maybe add the corresponding package name for the
> > new classes?
> >
> >
> >
> > -Matthias
> >
> > On 9/30/19 9:26 AM, Nikolay Izhikov wrote:
> > > Hello, Bruno.
> > >
> > > Thanks for feedback.
> > > KIP [1] updated according to your comments.
> > >
> > > [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > >
> > > On Mon, 30/09/2019 at 16:51 +0200, Bruno Cadonna wrote:
> > > > Hi Nikolay,
> > > >
> > > > Thank you for the KIP.
> > > >
> > > > I have a couple of minor comments:
> > > >
> > > > 1. I would not put implementation details into the KIP as you did
> with
> > > > the bodies of the constructor of the `VoidSerde` and the `serialize`
> > > > and `deserialize` methods. IMO, the signatures suffice. The
> > > > implementation is then discussed on the PR.
> > > >
> > > > 2. I guess you mean that you want to add the `VoidSerde` to the
> > > > `Serdes` class when you say "I want to add VoidSerde to main SerDe
> > > > collection.". If my guess is right, then please be more specific and
> > > > mention the `Serdes` class there.
> > > >
> > > > 3. The rejected alternative in the KIP is rather a workaround than a
> > > > rejected alternative. IMO it would be better to instead list the
> > > > rejected names for the Serde there if anything.
> > > >
> > > > Best,
> > > > Bruno
> > > >
> > > > On Sat, Sep 28, 2019 at 1:42 PM Nikolay Izhikov 
> wrote:
> > > > >
> > > > > Hello.
> > > > >
> > > > > Any additional comments?
> > > > > Should I start a vote for this KIP?
> > > > >
> > > > > On Tue, 24/09/2019 at 16:20 +0300, Nikolay Izhikov wrote:
> > > > > > Hello,
> > > > > >
> > > > > > KIP [1] updated to VoidSerde.
> > > > > >
> > > > > > [1]
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > >
> > > > > >
> > > > > > В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > > > > > > Ah it is!  +1 to VoidSerde
> > > > > > >
> > > > > > > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax <
> matth...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Because the actually data type is `Void`, I am wondering if
> `VoidSerde`
> > > > > > > > might be a more descriptive name?
> > > > > > > >
> > > > > > > > -Matthias
> > > > > > > >
> > > > > > > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > > > > > > Hello, guys
> > > > > > > > >
> > > > > > > > > Any additional feeback on this KIP?
> > > > > > > > > Should I start a vote?
> > > > > > > > >
> > > > > > > > > > On Fri, 20/09/2019 at 08:52 +0300, Nikolay Izhikov wrote:
> > > > > > > > > > Hello, Andrew.
> > > > > > > > > >
> > > > > > > > > > OK, if nobody mind, let's change it to Null.
> > > > > > > > > >
> > > > > > > > > > > On Thu, 19/09/2019 at 13:54 -0400, Andrew Otto wrote:
> > > > > > > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov <
> nizhi...@apache.org>
> > > > > > > >
> > > > > > > > wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hello, Andrew.
> > > > > > > > > > > >
> > > > > > > > > > > > Seems, usage null or nothing is matter of taste. I
> dont mind if we
> > > > > > > >
> > > > > > > > call it
> > > > > > > > > > > > NullSerde
> > > > > > > > > > > >
> > > > > > > > > > > > > On Thu, 19 Sep 2019, 20:28 Andrew Otto <
> o...@wikimedia.org>:
> > > > > > > > > > > >
> > > > > > > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov <
> nizhi...@apache.org
> > > > > > > > > > > > > wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > All,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > I'd like to start a discussion for adding a
> NothingSerde to Serdes.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > >
> > > > > > > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > > > > > >
> > > > > > > >
> > > > > > > >
> >
> >
>


Re: KAFKA-8104: Help with the review

2019-10-14 Thread Nikolay Izhikov
Hello, Guozhang.

Got it, thanks for the help with the PR.
Will wait for your review.

On Mon, 14/10/2019 at 13:40 -0700, Guozhang Wang wrote:
> Hello Nikolay,
> 
> I'm still on your PR, but was swamped with some other issues as the release
> code freeze date's approaching, will try to make another pass on it asap.
> 
> 
> Guozhang
> 
> On Mon, Oct 14, 2019 at 12:46 PM Nikolay Izhikov 
> wrote:
> 
> > Hello.
> > 
> > I got very helpful advice from Guozhang.
> > And now we have a ready fix and reproducer.
> > 
> > This PR fixes a very long-standing Kafka Consumer bug.
> > Please join the review.
> > 
> > [1] https://issues.apache.org/jira/browse/KAFKA-8104
> > [2] https://github.com/apache/kafka/pull/7460
> > 
> > В Пн, 07/10/2019 в 21:37 +0300, Nikolay Izhikov пишет:
> > > Hello.
> > > 
> > > We have the KAFKA-8104 "Consumer cannot rejoin to the group after rebalancing" [1] issue.
> > > It reproduces in many production environments.
> > > 
> > > I prepared a reproducer and a fix [2] for this issue.
> > > But I need assistance with the "fair" reproducer.
> > > 
> > > Please help me with the review and the "fair" reproducer:
> > > 
> > > The PR contains a fix for a race condition between the "consumer thread" and the "consumer coordinator heartbeat thread". It reproduces in many production environments.
> > > 
> > > Conditions for reproducing:
> > > 
> > > 1. The consumer thread initiates a rejoin to the group because of a commit timeout, calling `AbstractCoordinator#joinGroupIfNeeded`, which leads to `sendJoinGroupRequest`.
> > > 2. `JoinGroupResponseHandler` writes the new generation data to `AbstractCoordinator.this.generation` and leaves the `synchronized` section.
> > > 3. The heartbeat thread executes `maybeLeaveGroup` and clears the generation data via `resetGenerationOnLeaveGroup`.
> > > 4. The consumer thread executes `onJoinComplete(generation.generationId, generation.memberId, generation.protocol, memberAssignment);` with the cleared generation data. This leads to the corresponding exception.
> > > 
> > > The race is fixed with a condition in `maybeLeaveGroup`: if there is an ongoing rejoin in the consumer thread, there is no reason to reset the generation data and send a `LeaveGroupRequest` from the heartbeat thread.
> > > 
> > > This PR contains an unfair "reproducer".
> > > It is implemented with a `CountDownLatch` that imitates the described race in the `AbstractCoordinator` code.
> > > 
> > > 
> > > 
> > > [1] https://issues.apache.org/jira/browse/KAFKA-8104
> > > [2] https://github.com/apache/kafka/pull/7460
> 
> 


signature.asc
Description: This is a digitally signed message part


Re: KAFKA-8104: Help with the review

2019-10-14 Thread Nikolay Izhikov
Hello.

I got very helpful advice from Guozhang.
And now we have a ready fix and reproducer.

This PR fixes a very long-standing Kafka Consumer bug.
Please join the review.

[1] https://issues.apache.org/jira/browse/KAFKA-8104
[2] https://github.com/apache/kafka/pull/7460

On Mon, 07/10/2019 at 21:37 +0300, Nikolay Izhikov wrote:
> Hello.
> 
> We have the KAFKA-8104 "Consumer cannot rejoin to the group after rebalancing" [1] issue.
> It reproduces in many production environments.
> 
> I prepared a reproducer and a fix [2] for this issue.
> But I need assistance with the "fair" reproducer.
> 
> Please help me with the review and the "fair" reproducer:
> 
> The PR contains a fix for a race condition between the "consumer thread" and the "consumer coordinator heartbeat thread". It reproduces in many production environments.
> 
> Conditions for reproducing:
> 
> 1. The consumer thread initiates a rejoin to the group because of a commit timeout, calling `AbstractCoordinator#joinGroupIfNeeded`, which leads to `sendJoinGroupRequest`.
> 2. `JoinGroupResponseHandler` writes the new generation data to `AbstractCoordinator.this.generation` and leaves the `synchronized` section.
> 3. The heartbeat thread executes `maybeLeaveGroup` and clears the generation data via `resetGenerationOnLeaveGroup`.
> 4. The consumer thread executes `onJoinComplete(generation.generationId, generation.memberId, generation.protocol, memberAssignment);` with the cleared generation data. This leads to the corresponding exception.
> 
> The race is fixed with a condition in `maybeLeaveGroup`: if there is an ongoing rejoin in the consumer thread, there is no reason to reset the generation data and send a `LeaveGroupRequest` from the heartbeat thread.
> 
> This PR contains an unfair "reproducer".
> It is implemented with a `CountDownLatch` that imitates the described race in the `AbstractCoordinator` code.
> 
> 
> 
> [1] https://issues.apache.org/jira/browse/KAFKA-8104
> [2] https://github.com/apache/kafka/pull/7460




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-10-10 Thread Nikolay Izhikov
Hello.

This KIP was accepted.

I created a PR [1] for it.
Please review.

[1] https://github.com/apache/kafka/pull/7485


On Mon, 07/10/2019 at 14:56 -0700, Matthias J. Sax wrote:
> Thanks,
> 
> Overall LGTM. Can you maybe add the corresponding package name for the
> new classes?
> 
> 
> 
> -Matthias
> 
> On 9/30/19 9:26 AM, Nikolay Izhikov wrote:
> > Hello, Bruno.
> > 
> > Thanks for feedback.
> > KIP [1] updated according to your comments.
> > 
> > [1] 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > 
> > В Пн, 30/09/2019 в 16:51 +0200, Bruno Cadonna пишет:
> > > Hi Nikolay,
> > > 
> > > Thank you for the KIP.
> > > 
> > > I have a couple of minor comments:
> > > 
> > > 1. I would not put implementation details into the KIP as you did with
> > > the bodies of the constructor of the `VoidSerde` and the `serialize`
> > > and `deserialize` methods. IMO, the signatures suffice. The
> > > implementation is then discussed on the PR.
> > > 
> > > 2. I guess you mean that you want to add the `VoidSerde` to the
> > > `Serdes` class when you say "I want to add VoidSerde to main SerDe
> > > collection.". If my guess is right, then please be more specific and
> > > mention the `Serdes` class there.
> > > 
> > > 3. The rejected alternative in the KIP is rather a workaround than a
> > > rejected alternative. IMO it would be better to instead list the
> > > rejected names for the Serde there if anything.
> > > 
> > > Best,
> > > Bruno
> > > 
> > > On Sat, Sep 28, 2019 at 1:42 PM Nikolay Izhikov  
> > > wrote:
> > > > 
> > > > Hello.
> > > > 
> > > > Any additional comments?
> > > > Should I start a vote for this KIP?
> > > > 
> > > > В Вт, 24/09/2019 в 16:20 +0300, Nikolay Izhikov пишет:
> > > > > Hello,
> > > > > 
> > > > > KIP [1] updated to VoidSerde.
> > > > > 
> > > > > [1] 
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > 
> > > > > 
> > > > > В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > > > > > Ah it is!  +1 to VoidSerde
> > > > > > 
> > > > > > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> > > > > > 
> > > > > > wrote:
> > > > > > 
> > > > > > > Because the actually data type is `Void`, I am wondering if 
> > > > > > > `VoidSerde`
> > > > > > > might be a more descriptive name?
> > > > > > > 
> > > > > > > -Matthias
> > > > > > > 
> > > > > > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > > > > > Hello, guys
> > > > > > > > 
> > > > > > > > Any additional feeback on this KIP?
> > > > > > > > Should I start a vote?
> > > > > > > > 
> > > > > > > > В Пт, 20/09/2019 в 08:52 +0300, Nikolay Izhikov пишет:
> > > > > > > > > Hello, Andrew.
> > > > > > > > > 
> > > > > > > > > OK, if nobody mind, let's change it to Null.
> > > > > > > > > 
> > > > > > > > > В Чт, 19/09/2019 в 13:54 -0400, Andrew Otto пишет:
> > > > > > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > > > > > 
> > > > > > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > > > > > > > > > 
> > > > > > > 
> > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > > Hello, Andrew.
> > > > > > > > > > > 
> > > > > > > > > > > Seems, usage null or nothing is matter of taste. I dont 
> > > > > > > > > > > mind if we
> > > > > > > 
> > > > > > > call it
> > > > > > > > > > > NullSerde
> > > > > > > > > > > 
> > > > > > > > > > > чт, 19 сент. 2019 г., 20:28 Andrew Otto 
> > > > > > > > > > > :
> > > > > > > > > > > 
> > > > > > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > > > > > 
> > > > > > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > > > > > > >  > > > > > > > > > > > wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > > All,
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I'd like to start a discussion for adding a 
> > > > > > > > > > > > > NothingSerde to Serdes.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > 
> > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > > > > > 
> > > > > > > 
> > > > > > > 
> 
> 




Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-09 Thread Nikolay Izhikov
Thanks everyone!

After 3+ business days since this thread started, I'm concluding the vote
on KIP-527.

The KIP has passed with:

3 binding votes from Guozhang, Bill, and Matthias.
1 non-binding vote from Bruno.

Thank you all for voting!


On Tue, 08/10/2019 at 15:17 -0700, Guozhang Wang wrote:
> Thanks, I'm +1 (binding).
> 
> On Tue, Oct 8, 2019 at 2:33 AM Nikolay Izhikov  wrote:
> 
> > Hello, Guozhang.
> > 
> > The following has been added to the KIP:
> > 
> > > If non-null parameters are passed, then a java.lang.IllegalArgumentException will be thrown.
> > 
> > В Пн, 07/10/2019 в 15:51 -0700, Guozhang Wang пишет:
> > > Hello Nikolay,
> > > 
> > > Could you clarify in the wiki doc what would happen if the passed in
> > > `byte[] bytes` or `Void data` parameters are not null? From the example
> > 
> > PR
> > > we would throw exception here. I think it worth documenting it as part of
> > > the contract in the wiki page.
> > > 
> > > Otherwise, I'm +1.
> > > 
> > > 
> > > Guozhang
> > > 
> > > On Mon, Oct 7, 2019 at 3:19 PM Bill Bejeck  wrote:
> > > 
> > > > Thanks for the KIP.
> > > > 
> > > > +1(binding)
> > > > 
> > > > -Bill
> > > > 
> > > > On Mon, Oct 7, 2019 at 5:57 PM Matthias J. Sax 
> > > > wrote:
> > > > 
> > > > > +1 (binding)
> > > > > 
> > > > > 
> > > > > -Matthias
> > > > > 
> > > > > On 10/6/19 8:02 PM, Nikolay Izhikov wrote:
> > > > > > Hello,
> > > > > > 
> > > > > > Any additional feedback on this?
> > > > > > Do we need this in Kafka?
> > > > > > 
> > > > > > В Ср, 02/10/2019 в 08:30 +0200, Bruno Cadonna пишет:
> > > > > > > Hi Nikolay,
> > > > > > > 
> > > > > > > Thank you for the KIP!
> > > > > > > 
> > > > > > > +1 (non-binding)
> > > > > > > 
> > > > > > > Best,
> > > > > > > Bruno
> > > > > > > 
> > > > > > > On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov <
> > 
> > nizhi...@apache.org>
> > > > > 
> > > > > wrote:
> > > > > > > > 
> > > > > > > > Hello.
> > > > > > > > 
> > > > > > > > I would like to start vote for KIP-527: Add VoidSerde to Serdes
> > > > > > > > 
> > > > > > > > KIP -
> > > > 
> > > > 
> > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > > > > Discussion thread -
> > > > 
> > > > 
> > 
> > https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > > > > > > > 
> > > > > 
> > > > > 
> > > 
> > > 
> 
> 




Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-08 Thread Nikolay Izhikov
Hello, Guozhang.

The following has been added to the KIP:

> If non-null parameters are passed, then a java.lang.IllegalArgumentException will be thrown.
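
As an illustration of that contract, the behavior can be sketched as below. This is a hypothetical stand-in, not the final Kafka API: the class name `VoidSerdeSketch` and its static methods are placeholders for the `VoidSerializer`/`VoidDeserializer` pair the KIP proposes.

```java
// Illustrative sketch of the KIP-527 contract: for the Void type, the only
// legal value is null, and any non-null input is rejected.
public class VoidSerdeSketch {

    // Serializer side: null maps to a null payload; anything else is an error.
    public static byte[] serialize(String topic, Void data) {
        if (data != null)
            throw new IllegalArgumentException("Data should be null for a VoidSerializer.");
        return null;
    }

    // Deserializer side: a null payload maps back to a null Void reference.
    public static Void deserialize(String topic, byte[] bytes) {
        if (bytes != null)
            throw new IllegalArgumentException("Data should be null for a VoidDeserializer.");
        return null;
    }

    public static void main(String[] args) {
        // Null round-trips to null.
        if (serialize("topic", null) != null || deserialize("topic", null) != null)
            throw new AssertionError("null must round-trip to null");
        // A non-null payload violates the contract.
        try {
            deserialize("topic", new byte[] {1});
            throw new AssertionError("non-null payload must be rejected");
        } catch (IllegalArgumentException expected) {
            System.out.println("ok");
        }
    }
}
```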

On Mon, 07/10/2019 at 15:51 -0700, Guozhang Wang wrote:
> Hello Nikolay,
> 
> Could you clarify in the wiki doc what would happen if the passed in
> `byte[] bytes` or `Void data` parameters are not null? From the example PR
> we would throw exception here. I think it worth documenting it as part of
> the contract in the wiki page.
> 
> Otherwise, I'm +1.
> 
> 
> Guozhang
> 
> On Mon, Oct 7, 2019 at 3:19 PM Bill Bejeck  wrote:
> 
> > Thanks for the KIP.
> > 
> > +1(binding)
> > 
> > -Bill
> > 
> > On Mon, Oct 7, 2019 at 5:57 PM Matthias J. Sax 
> > wrote:
> > 
> > > +1 (binding)
> > > 
> > > 
> > > -Matthias
> > > 
> > > On 10/6/19 8:02 PM, Nikolay Izhikov wrote:
> > > > Hello,
> > > > 
> > > > Any additional feedback on this?
> > > > Do we need this in Kafka?
> > > > 
> > > > В Ср, 02/10/2019 в 08:30 +0200, Bruno Cadonna пишет:
> > > > > Hi Nikolay,
> > > > > 
> > > > > Thank you for the KIP!
> > > > > 
> > > > > +1 (non-binding)
> > > > > 
> > > > > Best,
> > > > > Bruno
> > > > > 
> > > > > On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov 
> > > 
> > > wrote:
> > > > > > 
> > > > > > Hello.
> > > > > > 
> > > > > > I would like to start vote for KIP-527: Add VoidSerde to Serdes
> > > > > > 
> > > > > > KIP -
> > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > > > > Discussion thread -
> > 
> > https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > > > > > 
> > > 
> > > 
> 
> 




KAFKA-8104: Help with the fair reproducer and review

2019-10-07 Thread Nikolay Izhikov
Hello.

We have the KAFKA-8104 "Consumer cannot rejoin to the group after rebalancing" [1] issue.
It reproduces in many production environments.

I prepared a reproducer and a fix [2] for this issue.
But I need assistance with the "fair" reproducer.

Please help me with the review and the "fair" reproducer:

The PR contains a fix for a race condition between the "consumer thread" and the "consumer coordinator heartbeat thread". It reproduces in many production environments.

Conditions for reproducing:

1. The consumer thread initiates a rejoin to the group because of a commit timeout, calling `AbstractCoordinator#joinGroupIfNeeded`, which leads to `sendJoinGroupRequest`.
2. `JoinGroupResponseHandler` writes the new generation data to `AbstractCoordinator.this.generation` and leaves the `synchronized` section.
3. The heartbeat thread executes `maybeLeaveGroup` and clears the generation data via `resetGenerationOnLeaveGroup`.
4. The consumer thread executes `onJoinComplete(generation.generationId, generation.memberId, generation.protocol, memberAssignment);` with the cleared generation data. This leads to the corresponding exception.

The race is fixed with a condition in `maybeLeaveGroup`: if there is an ongoing rejoin in the consumer thread, there is no reason to reset the generation data and send a `LeaveGroupRequest` from the heartbeat thread.

This PR contains an unfair "reproducer".
It is implemented with a `CountDownLatch` that imitates the described race in the `AbstractCoordinator` code.



[1] https://issues.apache.org/jira/browse/KAFKA-8104
[2] https://github.com/apache/kafka/pull/7460
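
The guard described above can be modeled in a few lines. This is an illustrative sketch, not the real `AbstractCoordinator`: names like `rejoinInProgress`, `onJoinGroupResponse`, and `completeJoin` are invented stand-ins, and the point is only to show why skipping the reset while a rejoin is in flight preserves the generation data for step 4.

```java
// Simplified model of the race and the fix: the heartbeat thread's
// maybeLeaveGroup() must not wipe generation data while the consumer
// thread has a rejoin in flight.
public class CoordinatorRaceSketch {

    public static final class Generation {
        public final int generationId;
        public Generation(int generationId) { this.generationId = generationId; }
    }

    private Generation generation = new Generation(1);
    private boolean rejoinInProgress = false;

    // Consumer thread (steps 1-2): JoinGroup succeeded, new generation written.
    public synchronized void onJoinGroupResponse(int newGenerationId) {
        rejoinInProgress = true;
        generation = new Generation(newGenerationId);
    }

    // Heartbeat thread (step 3): with the fix, skip both the reset and the
    // LeaveGroupRequest while a rejoin is ongoing.
    public synchronized boolean maybeLeaveGroup() {
        if (rejoinInProgress)
            return false;          // guard added by the fix
        generation = null;         // resetGenerationOnLeaveGroup()
        return true;               // would send a LeaveGroupRequest
    }

    // Consumer thread (step 4): reads the generation on join completion;
    // without the guard above this would observe cleared data.
    public synchronized int completeJoin() {
        rejoinInProgress = false;
        return generation.generationId;
    }

    public static void main(String[] args) {
        CoordinatorRaceSketch c = new CoordinatorRaceSketch();
        c.onJoinGroupResponse(2);          // consumer thread wrote generation 2
        boolean left = c.maybeLeaveGroup();// heartbeat thread fires mid-rejoin
        if (left || c.completeJoin() != 2)
            throw new AssertionError("generation data was reset during rejoin");
        System.out.println("ok");
    }
}
```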




Re: KAFKA-8584: Support of ByteBuffer for bytes field implemented [Convert Kafka RPCs to use automatically generated code]

2019-10-07 Thread Nikolay Izhikov
Hello.

Please review my changes [1].
I fixed all conflicts after the KAFKA-8885 [2] merge [3].

[1] https://github.com/apache/kafka/pull/7342
[2] https://issues.apache.org/jira/browse/KAFKA-8885
[3] 
https://github.com/apache/kafka/commit/0de61a4683b92bdee803c51211c3277578ab3edf

On Fri, 20/09/2019 at 09:18 -0700, Colin McCabe wrote:
> Hi Nikolay,
> 
> Thanks for working on this.  I think everyone agrees that we should have byte 
> buffer support in the generator.  We just haven't had a lot of time for 
> reviewing it lately.   I don't really mind which PR we use :)  I will take a 
> look at your PR today and see if we can get it into shape for what we need.
> 
> best,
> Colin
> 
> On Fri, Sep 20, 2019, at 09:18, Nikolay Izhikov wrote:
> > Hello, all.
> > 
> > Any feedback on this?
> > Do we need support for ByteBuffer in the generated RPC code?
> > 
> > Which PR should be reviewed and merged?
> > 
> > В Чт, 19/09/2019 в 10:11 +0300, Nikolay Izhikov пишет:
> > > Hello, guys.
> > > 
> > > Looks like we have duplicate tickets and PR's here.
> > > 
> > > One from me:
> > > 
> > > KAFKA-8584: Support of ByteBuffer for bytes field implemented.
> > > ticket - https://issues.apache.org/jira/browse/KAFKA-8584
> > > pr - https://github.com/apache/kafka/pull/7342
> > > 
> > > and one from Colin McCabe:
> > > 
> > > KAFKA-8628: Auto-generated Kafka RPC code should be able to use zero-copy 
> > > ByteBuffers
> > > ticket - https://issues.apache.org/jira/browse/KAFKA-8628
> > > pr - https://github.com/apache/kafka/pull/7032
> > > 
> > > I want to continue work on my PR and get it merged.
> > > But it is up to the community to decide which changes are best for the product.
> > > 
> > > Please let me know what you think.
> > > 
> > > 
> > > В Вт, 17/09/2019 в 01:52 +0300, Nikolay Izhikov пишет:
> > > > Hello, Kafka team.
> > > > 
> > > > I implemented KAFKA-8584 [1].
> > > > PR - [2]
> > > > Please, do the review.
> > > > 
> > > > [1] https://issues.apache.org/jira/browse/KAFKA-8584
> > > > [2] https://github.com/apache/kafka/pull/7342
> > 
> > Attachments:
> > * signature.asc




Re: [VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-06 Thread Nikolay Izhikov
Hello, 

Any additional feedback on this?
Do we need this in Kafka?

On Wed, 02/10/2019 at 08:30 +0200, Bruno Cadonna wrote:
> Hi Nikolay,
> 
> Thank you for the KIP!
> 
> +1 (non-binding)
> 
> Best,
> Bruno
> 
> On Tue, Oct 1, 2019 at 5:57 PM Nikolay Izhikov  wrote:
> > 
> > Hello.
> > 
> > I would like to start vote for KIP-527: Add VoidSerde to Serdes
> > 
> > KIP - 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > Discussion thread - 
> > https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E
> > 




[VOTE] KIP-527: Add VoidSerde to Serdes

2019-10-01 Thread Nikolay Izhikov
Hello.

I would like to start vote for KIP-527: Add VoidSerde to Serdes

KIP - 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
Discussion thread - 
https://lists.apache.org/thread.html/e6f95799898cc5d6e7d44dfd3fc2206117feb384a0a229a1c781ecd4@%3Cdev.kafka.apache.org%3E





Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-30 Thread Nikolay Izhikov
Hello, Bruno.

Thanks for the feedback.
The KIP [1] has been updated according to your comments.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes

On Mon, 30/09/2019 at 16:51 +0200, Bruno Cadonna wrote:
> Hi Nikolay,
> 
> Thank you for the KIP.
> 
> I have a couple of minor comments:
> 
> 1. I would not put implementation details into the KIP as you did with
> the bodies of the constructor of the `VoidSerde` and the `serialize`
> and `deserialize` methods. IMO, the signatures suffice. The
> implementation is then discussed on the PR.
> 
> 2. I guess you mean that you want to add the `VoidSerde` to the
> `Serdes` class when you say "I want to add VoidSerde to main SerDe
> collection.". If my guess is right, then please be more specific and
> mention the `Serdes` class there.
> 
> 3. The rejected alternative in the KIP is rather a workaround than a
> rejected alternative. IMO it would be better to instead list the
> rejected names for the Serde there if anything.
> 
> Best,
> Bruno
> 
> On Sat, Sep 28, 2019 at 1:42 PM Nikolay Izhikov  wrote:
> > 
> > Hello.
> > 
> > Any additional comments?
> > Should I start a vote for this KIP?
> > 
> > В Вт, 24/09/2019 в 16:20 +0300, Nikolay Izhikov пишет:
> > > Hello,
> > > 
> > > KIP [1] updated to VoidSerde.
> > > 
> > > [1] 
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> > > 
> > > 
> > > В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > > > Ah it is!  +1 to VoidSerde
> > > > 
> > > > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> > > > wrote:
> > > > 
> > > > > Because the actually data type is `Void`, I am wondering if 
> > > > > `VoidSerde`
> > > > > might be a more descriptive name?
> > > > > 
> > > > > -Matthias
> > > > > 
> > > > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > > > Hello, guys
> > > > > > 
> > > > > > Any additional feeback on this KIP?
> > > > > > Should I start a vote?
> > > > > > 
> > > > > > В Пт, 20/09/2019 в 08:52 +0300, Nikolay Izhikov пишет:
> > > > > > > Hello, Andrew.
> > > > > > > 
> > > > > > > OK, if nobody mind, let's change it to Null.
> > > > > > > 
> > > > > > > В Чт, 19/09/2019 в 13:54 -0400, Andrew Otto пишет:
> > > > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > > > 
> > > > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > > > > > > > 
> > > > > 
> > > > > wrote:
> > > > > > > > 
> > > > > > > > > Hello, Andrew.
> > > > > > > > > 
> > > > > > > > > Seems, usage null or nothing is matter of taste. I dont mind 
> > > > > > > > > if we
> > > > > 
> > > > > call it
> > > > > > > > > NullSerde
> > > > > > > > > 
> > > > > > > > > чт, 19 сент. 2019 г., 20:28 Andrew Otto :
> > > > > > > > > 
> > > > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > > > 
> > > > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > > > > >  > > > > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > > All,
> > > > > > > > > > > 
> > > > > > > > > > > I'd like to start a discussion for adding a NothingSerde 
> > > > > > > > > > > to Serdes.
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > 
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > > > 
> > > > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > > > 
> > > > > 
> > > > > 




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-28 Thread Nikolay Izhikov
Hello.

Any additional comments?
Should I start a vote for this KIP?

On Tue, 24/09/2019 at 16:20 +0300, Nikolay Izhikov wrote:
> Hello,
> 
> KIP [1] updated to VoidSerde.
> 
> [1] 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes
> 
> 
> В Вт, 24/09/2019 в 09:11 -0400, Andrew Otto пишет:
> > Ah it is!  +1 to VoidSerde
> > 
> > On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> > wrote:
> > 
> > > Because the actually data type is `Void`, I am wondering if `VoidSerde`
> > > might be a more descriptive name?
> > > 
> > > -Matthias
> > > 
> > > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > > Hello, guys
> > > > 
> > > > Any additional feeback on this KIP?
> > > > Should I start a vote?
> > > > 
> > > > В Пт, 20/09/2019 в 08:52 +0300, Nikolay Izhikov пишет:
> > > > > Hello, Andrew.
> > > > > 
> > > > > OK, if nobody mind, let's change it to Null.
> > > > > 
> > > > > В Чт, 19/09/2019 в 13:54 -0400, Andrew Otto пишет:
> > > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > > 
> > > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > > > > > 
> > > 
> > > wrote:
> > > > > > 
> > > > > > > Hello, Andrew.
> > > > > > > 
> > > > > > > Seems, usage null or nothing is matter of taste. I dont mind if we
> > > 
> > > call it
> > > > > > > NullSerde
> > > > > > > 
> > > > > > > чт, 19 сент. 2019 г., 20:28 Andrew Otto :
> > > > > > > 
> > > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > > 
> > > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > > >  > > > > > > > wrote:
> > > > > > > > 
> > > > > > > > > All,
> > > > > > > > > 
> > > > > > > > > I'd like to start a discussion for adding a NothingSerde to 
> > > > > > > > > Serdes.
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > 
> > > > > > > 
> > > 
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > > 
> > > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > > 
> > > 
> > > 




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-24 Thread Nikolay Izhikov
Hello,

KIP [1] updated to VoidSerde.

[1] 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+VoidSerde+to+Serdes


On Tue, 24/09/2019 at 09:11 -0400, Andrew Otto wrote:
> Ah it is!  +1 to VoidSerde
> 
> On Mon, Sep 23, 2019 at 11:25 PM Matthias J. Sax 
> wrote:
> 
> > Because the actually data type is `Void`, I am wondering if `VoidSerde`
> > might be a more descriptive name?
> > 
> > -Matthias
> > 
> > On 9/23/19 12:25 PM, Nikolay Izhikov wrote:
> > > Hello, guys
> > > 
> > > Any additional feeback on this KIP?
> > > Should I start a vote?
> > > 
> > > В Пт, 20/09/2019 в 08:52 +0300, Nikolay Izhikov пишет:
> > > > Hello, Andrew.
> > > > 
> > > > OK, if nobody mind, let's change it to Null.
> > > > 
> > > > В Чт, 19/09/2019 в 13:54 -0400, Andrew Otto пишет:
> > > > > NullSerdes seems more descriptive, but up to you!  :)
> > > > > 
> > > > > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov 
> > 
> > wrote:
> > > > > 
> > > > > > Hello, Andrew.
> > > > > > 
> > > > > > Seems, usage null or nothing is matter of taste. I dont mind if we
> > 
> > call it
> > > > > > NullSerde
> > > > > > 
> > > > > > чт, 19 сент. 2019 г., 20:28 Andrew Otto :
> > > > > > 
> > > > > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > > > > 
> > > > > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > > > >  > > > > > > wrote:
> > > > > > > 
> > > > > > > > All,
> > > > > > > > 
> > > > > > > > I'd like to start a discussion for adding a NothingSerde to 
> > > > > > > > Serdes.
> > > > > > > > 
> > > > > > > > 
> > > > > > > > 
> > > > > > 
> > > > > > 
> > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > > > > 
> > > > > > > > Your comments and suggestions are welcome.
> > > > > > > > 
> > 
> > 




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-23 Thread Nikolay Izhikov
Hello, guys

Any additional feedback on this KIP?
Should I start a vote?

On Fri, 20/09/2019 at 08:52 +0300, Nikolay Izhikov wrote:
> Hello, Andrew.
> 
> OK, if nobody mind, let's change it to Null.
> 
> В Чт, 19/09/2019 в 13:54 -0400, Andrew Otto пишет:
> > NullSerdes seems more descriptive, but up to you!  :)
> > 
> > On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov  wrote:
> > 
> > > Hello, Andrew.
> > > 
> > > Seems, usage null or nothing is matter of taste. I dont mind if we call it
> > > NullSerde
> > > 
> > > чт, 19 сент. 2019 г., 20:28 Andrew Otto :
> > > 
> > > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > > 
> > > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > > wrote:
> > > > 
> > > > > All,
> > > > > 
> > > > > I'd like to start a discussion for adding a NothingSerde to Serdes.
> > > > > 
> > > > > 
> > > > > 
> > > 
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > > 
> > > > > Your comments and suggestions are welcome.
> > > > > 




Re: KAFKA-8584: Support of ByteBuffer for bytes field implemented [Convert Kafka RPCs to use automatically generated code]

2019-09-20 Thread Nikolay Izhikov
Hello, all.

Any feedback on this?
Do we need support for ByteBuffer in the generated RPC code?

Which PR should be reviewed and merged?
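
For context on what ByteBuffer support buys, here is a sketch contrasting a copying `byte[]` accessor with a zero-copy `ByteBuffer` accessor for a `bytes` field. It is illustrative only (the class and method names are invented), not the actual generated Kafka RPC code.

```java
import java.nio.ByteBuffer;

// A "bytes" field located at [offset, offset + length) inside a received frame.
public class ZeroCopyBytesSketch {
    private final ByteBuffer backing; // the received frame
    private final int offset;
    private final int length;

    public ZeroCopyBytesSketch(ByteBuffer backing, int offset, int length) {
        this.backing = backing;
        this.offset = offset;
        this.length = length;
    }

    // byte[] accessor: always materializes a fresh copy of the field.
    public byte[] fieldAsArray() {
        byte[] copy = new byte[length];
        ByteBuffer dup = backing.duplicate();
        dup.position(offset);
        dup.get(copy);
        return copy;
    }

    // ByteBuffer accessor: a zero-copy slice sharing the frame's memory.
    public ByteBuffer fieldAsBuffer() {
        ByteBuffer dup = backing.duplicate();
        dup.position(offset);
        dup.limit(offset + length);
        return dup.slice();
    }

    public static void main(String[] args) {
        ByteBuffer frame = ByteBuffer.wrap(new byte[] {0, 1, 2, 3, 4});
        ZeroCopyBytesSketch field = new ZeroCopyBytesSketch(frame, 1, 3);
        // Both accessors expose the same 3-byte field.
        if (field.fieldAsArray().length != field.fieldAsBuffer().remaining())
            throw new AssertionError("accessors disagree on field length");
        System.out.println("ok");
    }
}
```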

On Thu, 19/09/2019 at 10:11 +0300, Nikolay Izhikov wrote:
> Hello, guys.
> 
> Looks like we have duplicate tickets and PR's here.
> 
> One from me:
> 
> KAFKA-8584: Support of ByteBuffer for bytes field implemented.
> ticket - https://issues.apache.org/jira/browse/KAFKA-8584
> pr - https://github.com/apache/kafka/pull/7342
> 
> and one from Colin McCabe:
> 
> KAFKA-8628: Auto-generated Kafka RPC code should be able to use zero-copy 
> ByteBuffers
> ticket - https://issues.apache.org/jira/browse/KAFKA-8628
> pr - https://github.com/apache/kafka/pull/7032
> 
> I want to continue work on my PR and get it merged.
> But it is up to the community to decide which changes are best for the product.
> 
> Please let me know what you think.
> 
> 
> В Вт, 17/09/2019 в 01:52 +0300, Nikolay Izhikov пишет:
> > Hello, Kafka team.
> > 
> > I implemented KAFKA-8584 [1].
> > PR - [2]
> > Please, do the review.
> > 
> > [1] https://issues.apache.org/jira/browse/KAFKA-8584
> > [2] https://github.com/apache/kafka/pull/7342




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-19 Thread Nikolay Izhikov
Hello, Andrew.

OK, if nobody minds, let's change it to Null.

On Thu, 19/09/2019 at 13:54 -0400, Andrew Otto wrote:
> NullSerdes seems more descriptive, but up to you!  :)
> 
> On Thu, Sep 19, 2019 at 1:37 PM Nikolay Izhikov  wrote:
> 
> > Hello, Andrew.
> > 
> > Seems, usage null or nothing is matter of taste. I dont mind if we call it
> > NullSerde
> > 
> > чт, 19 сент. 2019 г., 20:28 Andrew Otto :
> > 
> > > Why 'NothingSerdes' instead of 'NullSerdes'?
> > > 
> > > On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> > > wrote:
> > > 
> > > > All,
> > > > 
> > > > I'd like to start a discussion for adding a NothingSerde to Serdes.
> > > > 
> > > > 
> > > > 
> > 
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> > > > 
> > > > Your comments and suggestions are welcome.
> > > > 




Re: [DISCUSS] KIP-527: Add NothingSerde to Serdes

2019-09-19 Thread Nikolay Izhikov
Hello, Andrew.

It seems the choice between null and nothing is a matter of taste. I don't mind if we call it
NullSerde

Thu, Sep 19, 2019, 20:28 Andrew Otto :

> Why 'NothingSerdes' instead of 'NullSerdes'?
>
> On Thu, Sep 19, 2019 at 1:10 PM Nikolay Izhikov 
> wrote:
>
> > All,
> >
> > I'd like to start a discussion for adding a NothingSerde to Serdes.
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-527%3A+Add+NothingSerde+to+Serdes
> >
> > Your comments and suggestions are welcome.
> >
>

