[jira] [Created] (KAFKA-10458) Need a way to update quota for TokenBucket registered with Sensor

2020-09-02 Thread Anna Povzner (Jira)
Anna Povzner created KAFKA-10458:


 Summary: Need a way to update quota for TokenBucket registered 
with Sensor
 Key: KAFKA-10458
 URL: https://issues.apache.org/jira/browse/KAFKA-10458
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Reporter: Anna Povzner
Assignee: Anna Povzner
 Fix For: 2.7.0


For the Rate() metric with a quota config, we update the quota by updating the 
config of the KafkaMetric. However, this is not enough for TokenBucket, because 
TokenBucket uses the quota config on record() to properly calculate the number 
of tokens. Sensor passes the config stored in the corresponding StatAndConfig, 
which currently never changes. This means that after updating the quota via 
KafkaMetric.config (our current and only method), Sensor will record the value 
using the old quota but then measure the value to check for quota violation 
using the new quota value.
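For illustration, a minimal sketch of the gap, assuming the 2.7-era metrics
API (treat the exact calls as an approximation):

    import org.apache.kafka.common.metrics.{MetricConfig, Metrics, Quota}
    import org.apache.kafka.common.metrics.stats.TokenBucket

    object TokenBucketQuotaGap extends App {
      val metrics = new Metrics()
      val sensor  = metrics.sensor("controller-mutations")
      val name    = metrics.metricName("tokens", "test-group")

      // The quota config is captured by the sensor's StatAndConfig here,
      // at registration time, and is never refreshed afterwards.
      sensor.add(name, new TokenBucket(),
        new MetricConfig().quota(Quota.upperBound(10)))

      // Updating the quota on the KafkaMetric changes only the config used
      // when measuring the metric for quota-violation checks...
      metrics.metric(name).config(new MetricConfig().quota(Quota.upperBound(20)))

      // ...while record() still computes tokens from the old quota held by
      // the StatAndConfig, so recording and checking now disagree.
      sensor.record(5)
    }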



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-656: MirrorMaker2 Exactly-once Semantics

2020-09-02 Thread Ning Zhang
Bumping this thread again in the hope of more discussion.

On 2020/08/27 23:31:38, Ning Zhang  wrote: 
> Hello Mickael,
> 
> > 1. How does offset translation work with this new sink connector?
> > Should we also include a CheckpointSinkConnector?
> 
> The CheckpointSourceConnector will be re-used as-is. When EOS is 
> enabled, we will run 3 connectors:
> 
> MirrorSinkConnector (based on SinkConnector)
> MirrorCheckpointConnector (based on SourceConnector)
> MirrorHeartbeatConnector (based on SourceConnector)
> 
> For the last two connectors (checkpoint, heartbeat), if we do not strictly 
> require EOS, it is probably OK to keep the current SourceConnector-based 
> implementation.
> 
> I will update the KIP to clarify this, if it sounds acceptable.
> 
> > 2. Migrating to this new connector could be tricky as effectively the
> > Connect runtime needs to point to the other cluster, so its state
> > (stored in the __connect topics) is lost. Unfortunately there is no
> > easy way today to prime Connect with offsets. Not necessarily a
> > blocking issue but this should be described as I think the current
> > Migration section looks really optimistic at the moment
> 
> Totally agree. I will update the migration section with a note that, without 
> careful planning, migration could cause service interruption.
> 
> > 3. We can probably find a better name than "transaction.producer".
> > Maybe we can follow a similar pattern than Streams (which uses
> > "processing.guarantee")?
> 
> "processing.guarantee" sounds better
> 
> > 4. Transactional Ids used by the producer are generated based on the
> > task assignments. If there's a single task, if it crashes and restarts
> > it would still get the same id. Can this be an issue?
> 
> From https://tgrez.github.io/posts/2019-04-13-kafka-transactions.html, the 
> author suggests appending a suffix to the transaction.id:
> 
> "To avoid handling an external store we will use a static encoding similarly 
> as in spring-kafka:
> The transactional.id is now the transactionIdPrefix appended with 
> ..."
> 
> I think as long as no more than one producer uses the same 
> "transaction.id" at the same time, it is OK. 
> 
> Also from my tests, this "transaction.id" assignment works fine under 
> failures. To tighten it up, I also tested using the "connector task id" in 
> "transaction.id". The "connector task id" is typically composed of the 
> connector name and task id, which together are unique across all connectors 
> in a KC cluster.
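As a sketch, deriving the transactional.id from the connector task id could
look like this (hypothetical names, not the KIP's final scheme):

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig}
    import org.apache.kafka.common.serialization.ByteArraySerializer

    // One producer per sink task; "<connector-name>-<task-id>" is unique
    // within a Connect cluster, so no two live producers share an id.
    def transactionalProducer(connectorName: String, taskId: Int) = {
      val props = new Properties()
      props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "target-cluster:9092")
      props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, s"$connectorName-$taskId")
      val producer = new KafkaProducer[Array[Byte], Array[Byte]](
        props, new ByteArraySerializer, new ByteArraySerializer)
      producer.initTransactions() // also fences a zombie with the same id
      producer
    }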
> 
> > 5. The logic in the KIP creates a new transaction every time put() is
> > called. Is there a performance impact?
> 
> It could be a performance hit if the transaction batch is too small under a 
> high ingestion rate. The batch size depends on how many messages the 
> consumer polls each time. Maybe we could increase "max.poll.records" to get 
> a larger batch size.
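For example (hypothetical value; the default is 500):

    import java.util.Properties
    import org.apache.kafka.clients.consumer.ConsumerConfig

    // Each poll() becomes one transaction, so a larger poll batch means
    // fewer, larger transactions.
    val consumerProps = new Properties()
    consumerProps.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "2000")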
> 
> Overall, thanks so much for the valuable feedback. If the responses sound 
> good, I will do a cleanup pass on the KIP.
> 
> On 2020/08/27 09:59:57, Mickael Maison  wrote: 
> > Thanks Ning for the KIP. Having stronger guarantees when mirroring
> > data would be a nice improvement!
> > 
> > A few comments:
> > 1. How does offset translation work with this new sink connector?
> > Should we also include a CheckpointSinkConnector?
> > 
> > 2. Migrating to this new connector could be tricky as effectively the
> > Connect runtime needs to point to the other cluster, so its state
> > (stored in the __connect topics) is lost. Unfortunately there is no
> > easy way today to prime Connect with offsets. Not necessarily a
> > blocking issue but this should be described as I think the current
> > Migration section looks really optimistic at the moment
> > 
> > 3. We can probably find a better name than "transaction.producer".
> > Maybe we can follow a similar pattern than Streams (which uses
> > "processing.guarantee")?
> > 
> > 4. Transactional Ids used by the producer are generated based on the
> > task assignments. If there's a single task, if it crashes and restarts
> > it would still get the same id. Can this be an issue?
> > 
> > 5. The logic in the KIP creates a new transaction every time put() is
> > called. Is there a performance impact?
> > 
> > On Fri, Aug 21, 2020 at 4:58 PM Ryanne Dolan  wrote:
> > >
> > > Awesome, this will be a huge advancement. I also want to point out that
> > > this KIP implements MirrorSinkConnector as well, finally, which is a very
> > > often requested missing feature in my experience.
> > >
> > > Ryanne
> > >
> > > On Fri, Aug 21, 2020, 9:45 AM Ning Zhang  wrote:
> > >
> > > > Hello, I wrote a KIP about MirrorMaker2 Exactly-once Semantics (EOS)
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-656%3A+MirrorMaker2+Exactly-once+Semantics
> > > > At the high-level, it resembles the idea of how HDFS Sink Connector
> > > > achieves EOS across clusters by managing and storing the consumer 
> > > > offsets
> > > > in an external persistent storage, but also leverages the current Kafka 
> > > > EOS

Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-09-02 Thread Sagar
Thanks All!

I see 3 binding +1 votes and 2 non-binding +1s. Does it mean this KIP has
gained a lazy majority?

Thanks!
Sagar.

On Thu, Sep 3, 2020 at 6:51 AM Guozhang Wang  wrote:

> Thanks for the KIP Sagar. I'm +1 (binding) too.
>
>
> Guozhang
>
> On Tue, Sep 1, 2020 at 1:24 PM Bill Bejeck  wrote:
>
> > Thanks for the KIP! This is a great addition to the streams API.
> >
> > +1 (binding)
> >
> > -Bill
> >
> > On Tue, Sep 1, 2020 at 12:33 PM Sagar  wrote:
> >
> > > Hi All,
> > >
> > > Bumping the thread again !
> > >
> > > Thanks!
> > > Sagar.
> > >
> > > On Wed, Aug 5, 2020 at 12:08 AM Sophie Blee-Goldman <
> sop...@confluent.io
> > >
> > > wrote:
> > >
> > > > Thanks Sagar! +1 (non-binding)
> > > >
> > > > Sophie
> > > >
> > > > On Sun, Aug 2, 2020 at 11:37 PM Sagar 
> > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Just thought of bumping this voting thread again to see if we can
> > form
> > > > any
> > > > > consensus around this.
> > > > >
> > > > > Thanks!
> > > > > Sagar.
> > > > >
> > > > >
> > > > > On Mon, Jul 20, 2020 at 4:21 AM Adam Bellemare <
> > > adam.bellem...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > LGTM
> > > > > > +1 non-binding
> > > > > >
> > > > > > On Sun, Jul 19, 2020 at 4:13 AM Sagar  >
> > > > wrote:
> > > > > >
> > > > > > > Hi All,
> > > > > > >
> > > > > > > Bumping this thread to see if there are any feedbacks.
> > > > > > >
> > > > > > > Thanks!
> > > > > > > Sagar.
> > > > > > >
> > > > > > > On Tue, Jul 14, 2020 at 9:49 AM John Roesler <
> > vvcep...@apache.org>
> > > > > > wrote:
> > > > > > >
> > > > > > > > Thanks for the KIP, Sagar!
> > > > > > > >
> > > > > > > > I’m +1 (binding)
> > > > > > > >
> > > > > > > > -John
> > > > > > > >
> > > > > > > > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> > > > > > > > > Hi All,
> > > > > > > > >
> > > > > > > > > I would like to start a new voting thread for the below KIP
> > to
> > > > add
> > > > > > > prefix
> > > > > > > > > scan support to state stores:
> > > > > > > > >
> > > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> > > > > > > > > <
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks!
> > > > > > > > > Sagar.
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
> --
> -- Guozhang
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #44

2020-09-02 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #43

2020-09-02 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #41

2020-09-02 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-09-02 Thread John Roesler
Thanks, Leah,

This sounds good!

-John

On Wed, Sep 2, 2020, at 19:23, Matthias J. Sax wrote:
> Thanks for the KIP and the detailed discussion. I guess this all makes
> sense.
> 
> -Matthias
> 
> On 9/2/20 1:28 PM, Leah Thomas wrote:
> > Hey John,
> > 
> > I see what you're saying about the console consumer in particular. I don't think
> > that adding the extra config would *hurt* at all, so I'm good with keeping
> > that in the KIP. I re-updated the KIP proposal to include the configs.
> > 
> > The serde resolution sounds good to me as well, I added a few lines in the
> > KIP about logging an error when the *timeWindowedSerde* implicit is called.
> > 
> > Let me know if there are any other concerns, else I'll resume voting.
> > 
> > Cheers,
> > Leah
> > 
> > On Tue, Sep 1, 2020 at 11:17 AM John Roesler  wrote:
> > 
> >> Hi Leah and Sophie,
> >>
> >> Sorry for the delayed response.
> >>
> >> You can pass in pre-instantiated (and therefore arbitrarily
> >> constructed) deserializers to the KafkaConsumer. However,
> >> this doesn't mean we should drop the configs. The same
> >> argument for dropping the configs implies that the consumer
> >> shouldn't have configs for setting the deserializers at all.
> >> This doesn't sound right, and I'm asking myself why. The
> >> most likely answer seems to me to be that you sometimes
> >> create a Consumer without invoking the Java constructor at
> >> all. For example, when you use the console-consumer. In that
> >> case, it would be indispensable to be able to fully
> >> configure the deserializers via a properties file.
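A sketch of the contrast, using the existing windowed deserializer and
consumer constructors (the window size of 60000 ms is an arbitrary example):

    import java.util.Properties
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.kafka.streams.kstream.{TimeWindowedDeserializer, Windowed}

    // In code, a fully constructed deserializer (window size and all) can
    // be handed straight to the consumer...
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("group.id", "windowed-reader")
    val windowedKeys =
      new TimeWindowedDeserializer[String](new StringDeserializer, 60000L)
    val consumer = new KafkaConsumer[Windowed[String], String](
      props, windowedKeys, new StringDeserializer)
    // ...but the console consumer can only configure deserializers through
    // properties, which is why a window-size config is still needed.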
> >>
> >> Therefore, I think we should go ahead and propose the new
> >> config. (Sorry for the flip-flop, Leah)
> >>
> >> Regarding the implicits, Leah's conclusion sounds good to
> >> me. Yuriy is not adding any implicit for this serde to the
> >> new class, and we'll just add an ERROR log to the existing
> >> implicit. Once KIP-616 is merged, the existing implicit will
> >> be deprecated along with all the other implicits in that
> >> class, so there will be two "forces" pushing people to the
> >> new interface, where they will discover the lack of an
> >> implicit, which then forces them to call the non-deprecated
> >> constructors directly.
> >>
> >> To answer Sophie's question, "implicit" is a feature of
> >> Scala that allows the type system to automatically resolve
> >> method arguments when there is just one possible argument in
> >> scope. There's a bunch of docs for it, so I won't waste a
> >> ton of e-ink on the details; the docs will be crystal clear
> >> just assuming you know all about monads and monoids and
> >> type-level programming ;)
> >>
> >> The punch line for us is that we provide implicits for the
> >> basic serdes, and also for turning pairs of
> >> serializers/deserializers into serdes, so you can avoid
> >> explicitly passing any serdes into Streams DSL operations,
> >> but also not have to fall back on the default key/value
> >> serde configs. Instead, the type system will plug in the
> >> right serde for the K/V types at each operation.
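To make the resolution mechanism concrete, a self-contained Scala sketch
with toy types (not the actual Streams API):

    object ImplicitsDemo extends App {
      trait Serde[T]

      object Serdes {
        implicit val stringSerde: Serde[String] = new Serde[String] {}
        implicit val longSerde: Serde[Long]     = new Serde[Long] {}
      }

      // The compiler fills in the serde argument from the single implicit
      // of the right type in scope -- the same mechanism the Streams Scala
      // DSL uses to avoid explicit serde parameters.
      def groupByKey[K](topic: String)(implicit serde: Serde[K]): Unit =
        println(s"$topic -> ${serde.getClass.getName}")

      import Serdes._
      groupByKey[String]("clicks") // resolves stringSerde
      groupByKey[Long]("counts")   // resolves longSerde
    }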
> >>
> >> We would _not_ add an implicit for a serde that we can't
> >> construct in a context-free way using just type information,
> >> as in this case. That's why Yuriy dropped the new implicit
> >> and why we're going to add an error to the existing
> >> implicit. On the other hand, removing the existing implicit
> >> will cause compiler errors when the type system is no longer
> >> able to find a suitable argument for an implicit parameter,
> >> so we don't want to just remove the existing implicit.
> >>
> >> Thanks,
> >> -John
> >>
> >> On Mon, 2020-08-31 at 16:28 -0500, Leah Thomas wrote:
> >>> Hey Sophie,
> >>>
> >>> Thanks for the catch! It makes sense that the consumer would accept a
> >>> deserializer somewhere, so we can definitely skip the additional
> >> configs. I
> >>> updated the KIP to reflect that.
> >>>
> >>> John seems to know Scala better than I do as well, but I think we need to
> >>> keep the current implicit that allows users to just pass in a serde and
> >> no
> >>> window size for backwards compatibility. It seems to me that based on the
> >>> discussion around KIP-616, we
> >>> can pretty easily do John's third suggestion for handling this implicit:
> >>> logging an error message and passing to a non-deprecated constructor
> >> using
> >>> some default value. It seems from KIP-616 that most scala users will use
> >>> the new Serdes class anyways, and Yuriy is just removing these implicits
> >> so
> >>> it seems like whatever fix we decide for this class won't get used too
> >>> heavily.
> >>>
> >>> Cheers,
> >>> Leah
> >>>
> >>> On Thu, Aug 27, 2020 at 8:49 PM Sophie Blee-Goldman  >>>
> >>> wrote:
> >>>
>  Ok I'm definitely feeling pretty dumb now, but I was just thinking how
>  ridiculous
>  it is that the Consumer forces you to configure your Deserializer
> >> through
>  actual
>  config maps 

Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-09-02 Thread Matthias J. Sax
Thanks for the input Sophie. Those are all good points and I fully agree
with them.

When saying "pausing the processing threads" I only considered them in
`RUNNING` and thought we figure out the detail on the PR... Excellent catch!

Changing state transitions is to some extent backward incompatible, but
I think (IIRC) we did it in the past and I personally tend to find it
ok. That's why we cover those changes in a KIP.

-Matthias

On 9/2/20 6:18 PM, Sophie Blee-Goldman wrote:
> If we're going to add a new GLOBAL_RESTORING state to the KafkaStreams FSM,
> maybe it would make sense to add a new plain RESTORING state that we
> transition
> to when restoring non-global state stores following a rebalance. Right now
> all restoration
> occurs within the REBALANCING state, which is pretty misleading.
> Applications that
> have large amounts of state to restore will appear to be stuck rebalancing
> according to
> the state listener, when in fact the rebalance has completed long ago.
> Given that there
> are very much real scenarios where you actually *are* stuck rebalancing, it
> seems useful to
> distinguish plain restoration from more insidious cases that may require
> investigation and/or
> intervention.
> 
> I don't mean to hijack this KIP, I just think it would be odd to introduce
> GLOBAL_RESTORING
> when there is no other kind of RESTORING state. One question this brings
> up, and I
> apologize if this has already been addressed, is what to do when we are
> restoring
> both normal and global state stores? It sounds like we plan to pause the
> StreamThreads
> entirely, but there doesn't seem to be any reason not to allow regular
> state restoration -- or
> even standby processing -- while the global state is restoring. Given the
> current effort to move
> restoration & standbys to a separate thread, allowing them to continue
> while pausing
> only the StreamThread seems quite natural.
> 
> Assuming that we actually do allow both types of restoration to occur at
> the same time,
> and if we did add a plain RESTORING state as well, which state should we
> end up in?
> AFAICT the main reason for having a distinct {GLOBAL_}RESTORING state is to
> alert
> users of the non-progress of their active tasks. In both cases, the active
> task is unable
> to continue until restoration has completed, so why distinguish between the
> two at all?
> Would it make sense to avoid a special GLOBAL_RESTORING state and just
> introduce
> a single unified RESTORING state to cover both the regular and global case?
> Just a thought
> 
> My only concern is that this might be considered a breaking change: users
> might be
> looking for the REBALANCING -> RUNNING transition specifically in order to
> alert when
> the application has started up, and we would no longer go directly from
> REBALANCING to
>  RUNNING. I think we actually did/do this ourselves in a number of
> integration tests and
> possibly in some examples. That said, it seems more appropriate to just
> listen for
> the RUNNING state rather than for a specific transition, and we should
> encourage users
> to do so rather than go out of our way to support transition-type state
> listeners.
> 
> Cheers,
> Sophie
> 
> On Wed, Sep 2, 2020 at 5:53 PM Matthias J. Sax  wrote:
> 
>> I think this makes sense.
>>
>> When we introduce this new state, we might also tackle the Jira I
>> mentioned. If there is a global thread, on startup of a `KafkaStreams`
>> client we should not transition to `REBALANCING` but to the new state, and
>> maybe also make the "bootstrapping" non-blocking.
>>
>> I guess it's worth mentioning this in the KIP.
>>
>> Btw: The new state for KafkaStreams should also be part of the KIP as it
>> is a public API change, too.
>>
>>
>> -Matthias
>>
>> On 8/29/20 9:37 AM, John Roesler wrote:
>>> Hi Navinder,
>>>
>>> Thanks for the ping. Yes, that all sounds right to me. The name
>> “RESTORING_GLOBAL” sounds fine, too.
>>>
>>> I think as far as warnings go, we’d just propose to mention it in the
>> javadoc of the relevant methods that the given topics should be compacted.
>>>
>>> Thanks!
>>> -John
>>>
>>> On Fri, Aug 28, 2020, at 12:42, Navinder Brar wrote:
 Gentle ping.

 ~ Navinder
 On Wednesday, 19 August, 2020, 06:59:58 pm IST, Navinder Brar
  wrote:


 Thanks Matthias & John,



 I am glad we are converging towards an understanding. So, to summarize,

 we will still keep treating this change in KIP and instead of providing
>> a reset

 strategy, we will cleanup, and reset to earliest and build the state.

 When we hit the exception and we are building the state, we will stop
>> all

 processing and change the state of KafkaStreams to something like

 “RESTORING_GLOBAL” or the like.



 How do we plan to educate users on the undesired effects of using

 non-compacted global topics? (via the KIP itself?)


 +1 on changing the KTable behavior, 

Re: [DISCUSS] KIP-655: Windowed "Distinct" Operation for KStream

2020-09-02 Thread Matthias J. Sax
Thanks for the KIP Ivan. Having a built-in deduplication operator is for
sure a good addition.

Couple of questions:

(1) Using the `idExtractor` has the issue that data might not be
co-partitioned as you mentioned in the KIP. Thus, I am wondering if it
might be better to do deduplication only on the key? If one sets a new
key upstream (ie, extracts the deduplication id into the key), the
`distinct` operator could automatically repartition the data and thus we
would avoid user errors.

(2) What is the motivation for allowing the `idExtractor` to return
`null`? Might be good to have some use-case examples for this feature.

(3) Is using a `TimeWindow` really what we want? I was wondering if a
`SlidingWindow` might be better? Or maybe we need a new type of window?

It would be helpful if you could describe potential use cases in more
detail. -- I am mainly wondering about hopping windows: each record would
always fall into multiple windows and thus would be emitted multiple
times, ie, each time the window closes. Is this really a valid use case?

It seems that for de-duplication, one wants to have some "expiration
time", ie, for each ID, deduplicate all consecutive records with the
same ID and emit the first record after the "expiration time" passed. In
terms of a window, this would mean that the window starts at `r.ts` and
ends at `r.ts + windowSize`, ie, the window is aligned to the data.
TimeWindows are aligned to the epoch though. While `SlidingWindows` also
align to the data, for the aggregation use-case they go backward in
time, while we need a window that goes forward in time. It's an open
question if we can re-purpose `SlidingWindows` -- it might be ok to
make the alignment (into the past vs into the future) an
operator-dependent behavior?
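A toy sketch of that data-aligned "expiration time" semantics (plain Scala,
not the proposed operator):

    import scala.collection.mutable

    // A record opens a window [r.ts, r.ts + windowSize); later records with
    // the same id inside that window are dropped, and the first record
    // after the window re-opens it.
    class Deduplicator[I](windowSizeMs: Long) {
      private val lastEmitted = mutable.Map.empty[I, Long]

      def shouldEmit(id: I, ts: Long): Boolean = {
        val emit = lastEmitted.get(id).forall(start => ts >= start + windowSizeMs)
        if (emit) lastEmitted(id) = ts
        emit
      }
    }

    // e.g. windowSizeMs = 10: ts 0 emits, ts 5 is dropped, ts 12 emits again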

(4) `isPersistent` -- instead of using this flag, it seems better to
allow users to pass in a `Materialized` parameter next to
`DistinctParameters` to configure the state store?

(5) I am wondering if we should really have 4 overloads for
`DistinctParameters.with()`? It might be better to have one overload
with all required parameters, and add optional parameters using the
builder pattern? This seems to follow the DSL Grammar proposal.
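A minimal sketch of that one-overload-plus-builder shape (all names
hypothetical):

    import java.time.Duration

    // One factory taking the required parameter; optional settings are
    // added fluently instead of via four overloads.
    final case class DistinctParameters[K, V, I](
        idExtractor: (K, V) => I,
        windowSize: Duration = Duration.ofMinutes(1),
        storeName: Option[String] = None) {
      def withWindowSize(size: Duration): DistinctParameters[K, V, I] =
        copy(windowSize = size)
      def withStoreName(name: String): DistinctParameters[K, V, I] =
        copy(storeName = Some(name))
    }

    // usage (e.g. in a worksheet):
    val params = DistinctParameters[String, String, String]((k, v) => k)
      .withWindowSize(Duration.ofMinutes(10))
      .withStoreName("distinct-store")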

(6) Even if it might be an implementation detail (and maybe the KIP
itself does not need to mention it), can you give a high-level overview of
how you intend to implement it (that would be easier to grok, compared
to reading the PR).



-Matthias

On 8/23/20 4:29 PM, Ivan Ponomarev wrote:
> Sorry, I forgot to add [DISCUSS] tag to the topic
> 
> 24.08.2020 2:27, Ivan Ponomarev пишет:
>> Hello,
>>
>> I'd like to start a discussion for KIP-655.
>>
>> KIP-655:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-655%3A+Windowed+Distinct+Operation+for+Kafka+Streams+API
>>
>>
>> I also opened a proof-of-concept PR for you to experiment with the API:
>>
>> PR#9210: https://github.com/apache/kafka/pull/9210
>>
>> Regards,
>>
>> Ivan Ponomarev
> 





Re: [VOTE] KIP-614: Add Prefix Scan support for State Stores

2020-09-02 Thread Guozhang Wang
Thanks for the KIP Sagar. I'm +1 (binding) too.


Guozhang

On Tue, Sep 1, 2020 at 1:24 PM Bill Bejeck  wrote:

> Thanks for the KIP! This is a great addition to the streams API.
>
> +1 (binding)
>
> -Bill
>
> On Tue, Sep 1, 2020 at 12:33 PM Sagar  wrote:
>
> > Hi All,
> >
> > Bumping the thread again !
> >
> > Thanks!
> > Sagar.
> >
> > On Wed, Aug 5, 2020 at 12:08 AM Sophie Blee-Goldman  >
> > wrote:
> >
> > > Thanks Sagar! +1 (non-binding)
> > >
> > > Sophie
> > >
> > > On Sun, Aug 2, 2020 at 11:37 PM Sagar 
> wrote:
> > >
> > > > Hi All,
> > > >
> > > > Just thought of bumping this voting thread again to see if we can
> form
> > > any
> > > > consensus around this.
> > > >
> > > > Thanks!
> > > > Sagar.
> > > >
> > > >
> > > > On Mon, Jul 20, 2020 at 4:21 AM Adam Bellemare <
> > adam.bellem...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > LGTM
> > > > > +1 non-binding
> > > > >
> > > > > On Sun, Jul 19, 2020 at 4:13 AM Sagar 
> > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > Bumping this thread to see if there are any feedbacks.
> > > > > >
> > > > > > Thanks!
> > > > > > Sagar.
> > > > > >
> > > > > > On Tue, Jul 14, 2020 at 9:49 AM John Roesler <
> vvcep...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > > > Thanks for the KIP, Sagar!
> > > > > > >
> > > > > > > I’m +1 (binding)
> > > > > > >
> > > > > > > -John
> > > > > > >
> > > > > > > On Sun, Jul 12, 2020, at 02:05, Sagar wrote:
> > > > > > > > Hi All,
> > > > > > > >
> > > > > > > > I would like to start a new voting thread for the below KIP
> to
> > > add
> > > > > > prefix
> > > > > > > > scan support to state stores:
> > > > > > > >
> > > > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > > > 614%3A+Add+Prefix+Scan+support+for+State+Stores
> > > > > > > > <
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-614%3A+Add+Prefix+Scan+support+for+State+Stores
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > > Sagar.
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


-- 
-- Guozhang


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-09-02 Thread Sophie Blee-Goldman
If we're going to add a new GLOBAL_RESTORING state to the KafkaStreams FSM,
maybe it would make sense to add a new plain RESTORING state that we
transition
to when restoring non-global state stores following a rebalance. Right now
all restoration
occurs within the REBALANCING state, which is pretty misleading.
Applications that
have large amounts of state to restore will appear to be stuck rebalancing
according to
the state listener, when in fact the rebalance has completed long ago.
Given that there
are very much real scenarios where you actually *are* stuck rebalancing, it
seems useful to
distinguish plain restoration from more insidious cases that may require
investigation and/or
intervention.

I don't mean to hijack this KIP, I just think it would be odd to introduce
GLOBAL_RESTORING
when there is no other kind of RESTORING state. One question this brings
up, and I
apologize if this has already been addressed, is what to do when we are
restoring
both normal and global state stores? It sounds like we plan to pause the
StreamThreads
entirely, but there doesn't seem to be any reason not to allow regular
state restoration -- or
even standby processing -- while the global state is restoring. Given the
current effort to move
restoration & standbys to a separate thread, allowing them to continue
while pausing
only the StreamThread seems quite natural.

Assuming that we actually do allow both types of restoration to occur at
the same time,
and if we did add a plain RESTORING state as well, which state should we
end up in?
AFAICT the main reason for having a distinct {GLOBAL_}RESTORING state is to
alert
users of the non-progress of their active tasks. In both cases, the active
task is unable
to continue until restoration has completed, so why distinguish between the
two at all?
Would it make sense to avoid a special GLOBAL_RESTORING state and just
introduce
a single unified RESTORING state to cover both the regular and global case?
Just a thought

My only concern is that this might be considered a breaking change: users
might be
looking for the REBALANCING -> RUNNING transition specifically in order to
alert when
the application has started up, and we would no longer go directly from
REBALANCING to
 RUNNING. I think we actually did/do this ourselves in a number of
integration tests and
possibly in some examples. That said, it seems more appropriate to just
listen for
the RUNNING state rather than for a specific transition, and we should
encourage users
to do so rather than go out of our way to support transition-type state
listeners.
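A small sketch of a listener keyed to the target state rather than a
particular transition (current KafkaStreams API):

    import org.apache.kafka.streams.KafkaStreams

    // Fires on reaching RUNNING regardless of which states precede it, so
    // adding a RESTORING state would not break startup detection.
    def onStartup(streams: KafkaStreams)(callback: () => Unit): Unit =
      streams.setStateListener((newState, oldState) =>
        if (newState == KafkaStreams.State.RUNNING) callback())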

Cheers,
Sophie

On Wed, Sep 2, 2020 at 5:53 PM Matthias J. Sax  wrote:

> I think this makes sense.
>
> When we introduce this new state, we might also tackle the Jira I
> mentioned. If there is a global thread, on startup of a `KafkaStreams`
> client we should not transition to `REBALANCING` but to the new state, and
> maybe also make the "bootstrapping" non-blocking.
>
> I guess it's worth mentioning this in the KIP.
>
> Btw: The new state for KafkaStreams should also be part of the KIP as it
> is a public API change, too.
>
>
> -Matthias
>
> On 8/29/20 9:37 AM, John Roesler wrote:
> > Hi Navinder,
> >
> > Thanks for the ping. Yes, that all sounds right to me. The name
> “RESTORING_GLOBAL” sounds fine, too.
> >
> > I think as far as warnings go, we’d just propose to mention it in the
> javadoc of the relevant methods that the given topics should be compacted.
> >
> > Thanks!
> > -John
> >
> > On Fri, Aug 28, 2020, at 12:42, Navinder Brar wrote:
> >> Gentle ping.
> >>
> >> ~ Navinder
> >> On Wednesday, 19 August, 2020, 06:59:58 pm IST, Navinder Brar
> >>  wrote:
> >>
> >>
> >> Thanks Matthias & John,
> >>
> >>
> >>
> >> I am glad we are converging towards an understanding. So, to summarize,
> >>
> >> we will still keep treating this change in KIP and instead of providing
> a reset
> >>
> >> strategy, we will clean up, and reset to earliest and build the state.
> >>
> >> When we hit the exception and we are building the state, we will stop
> all
> >>
> >> processing and change the state of KafkaStreams to something like
> >>
> >> “RESTORING_GLOBAL” or the like.
> >>
> >>
> >>
> >> How do we plan to educate users on the undesired effects of using
> >>
> >> non-compacted global topics? (via the KIP itself?)
> >>
> >>
> >> +1 on changing the KTable behavior, reset policy for global, connecting
> >> processors to global for a later stage when demanded.
> >>
> >> Regards,
> >> Navinder
> >> On Wednesday, 19 August, 2020, 01:00:58 pm IST, Matthias J. Sax
> >>  wrote:
> >>
> >>  Your observation is correct. Connecting (regular) stores to processors
> >> is necessary to "merge" sub-topologies into single ones if a store is
> >> shared. -- For global stores, the structure of the program does not
>> change and thus connecting processors to global stores is not required.
> >>
> >> Also given our experience with restoring regular state stores (ie,
> >> partial processing of task that 

Re: Can't find any sendfile system call trace from Kafka process?

2020-09-02 Thread Haruki Okada
There are two cases in which zero-copy fetch via sendfile does not work.

- SSL encryption is enabled
  * The Kafka process must encrypt the data before sending it to the client
  -
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/network/SslTransportLayer.java#L946
  * Unlike the plaintext transport layer, which writes directly to the socket:
  -
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/network/PlaintextTransportLayer.java#L214
- Message down conversion happens
  * refs:
https://kafka.apache.org/documentation/#upgrade_10_performance_impact

Does your environment match either of the above cases?
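For reference, the plaintext zero-copy path boils down to
FileChannel.transferTo, which the JVM implements with sendfile(2) on Linux;
a minimal sketch of that shape:

    import java.nio.channels.{FileChannel, SocketChannel}

    // transferTo is where sendfile(2) happens; with SSL the bytes instead
    // pass through user space for encryption, so no sendfile shows up in
    // syscall traces.
    def sendLogSlice(log: FileChannel, socket: SocketChannel,
                     position: Long, count: Long): Long =
      log.transferTo(position, count, socket)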

2020年9月1日(火) 9:05 Ming Liu :

> Hi Kafka dev community,
>  As we know, one major reason that Kafka is fast is that it uses
> sendfile() for zero copy, as described at
> https://kafka.apache.org/documentation/#producerconfigs,
>
>
>
> *This combination of pagecache and sendfile means that on a Kafka cluster
> where the consumers are mostly caught up you will see no read activity on
> the disks whatsoever as they will be serving data entirely from cache.*
>
> However, when I ran tracing on all my Kafka brokers, I didn't get a
> single sendfile system call. Why is this? Does it eventually translate to
> plain read/write syscalls?
>
> sudo ./syscount -p 126806 -d 30
> Tracing syscalls, printing top 10... Ctrl+C to quit.
> [17:44:10]
> SYSCALL      COUNT
> epoll_wait  108482
> write       107165
> epoll_ctl    95058
> futex        86716
> read         86388
> pread        26910
> fstat         9213
> getrusage      120
> close           27
> open            21
>


-- 

Okada Haruki
ocadar...@gmail.com



Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-09-02 Thread Matthias J. Sax
I think this makes sense.

When we introduce this new state, we might also tackle the Jira I
mentioned. If there is a global thread, on startup of a `KafkaStreams`
client we should not transition to `REBALANCING` but to the new state, and
maybe also make the "bootstrapping" non-blocking.

I guess it's worth mentioning this in the KIP.

Btw: The new state for KafkaStreams should also be part of the KIP as it
is a public API change, too.


-Matthias

On 8/29/20 9:37 AM, John Roesler wrote:
> Hi Navinder, 
> 
> Thanks for the ping. Yes, that all sounds right to me. The name 
> “RESTORING_GLOBAL” sounds fine, too. 
> 
> I think as far as warnings go, we’d just propose to mention it in the javadoc 
> of the relevant methods that the given topics should be compacted. 
> 
> Thanks!
> -John
> 
> On Fri, Aug 28, 2020, at 12:42, Navinder Brar wrote:
>> Gentle ping.
>>
>> ~ Navinder
>> On Wednesday, 19 August, 2020, 06:59:58 pm IST, Navinder Brar 
>>  wrote:  
>>  
>>   
>> Thanks Matthias & John, 
>>
>>
>>
>> I am glad we are converging towards an understanding. So, to summarize, 
>>
>> we will still keep treating this change in KIP and instead of providing a 
>> reset
>>
>> strategy, we will clean up, and reset to earliest and build the state. 
>>
>> When we hit the exception and we are building the state, we will stop all 
>>
>> processing and change the state of KafkaStreams to something like 
>>
>> “RESTORING_GLOBAL” or the like. 
>>
>>
>>
>> How do we plan to educate users on the undesired effects of using 
>>
>> non-compacted global topics? (via the KIP itself?)
>>
>>
>> +1 on changing the KTable behavior, reset policy for global, connecting 
>> processors to global for a later stage when demanded.
>>
>> Regards,
>> Navinder
>>     On Wednesday, 19 August, 2020, 01:00:58 pm IST, Matthias J. Sax 
>>  wrote:  
>>  
>>  Your observation is correct. Connecting (regular) stores to processors
>> is necessary to "merge" sub-topologies into single ones if a store is
>> shared. -- For global stores, the structure of the program does not
>> change and thus connecting processors to global stores is not required.
>>
>> Also given our experience with restoring regular state stores (ie,
>> partial processing of tasks that don't need restore), it seems better to
>> pause processing and move all CPU and network resources to the global
>> thread to rebuild the global store as soon as possible instead of
>> potentially slowing down the restore in order to make progress on some
>> tasks.
>>
>> Of course, if we collect real world experience and it becomes an issue,
>> we could still try to change it?
>>
>>
>> -Matthias
>>
>>
>> On 8/18/20 3:31 PM, John Roesler wrote:
>>> Thanks Matthias,
>>>
>>> Sounds good. I'm on board with no public API change and just
>>> recovering instead of crashing.
>>>
>>> Also, to be clear, I wouldn't drag KTables into it; I was
>>> just trying to wrap my head around the congruity of our
>>> choice for GlobalKTable with respect to KTable.
>>>
>>> I agree that whatever we decide to do would probably also
>>> resolve KAFKA-7380.
>>>
>>> Moving on to discuss the behavior change, I'm wondering if
>>> we really need to block all the StreamThreads. It seems like
>>> we only need to prevent processing on any task that's
>>> connected to the GlobalStore. 
>>>
>>> I just took a look at the topology building code, and it
>>> actually seems that connections to global stores don't need
>>> to be declared. That's a bummer, since it means that we
>>> really do have to stop all processing while the global
>>> thread catches up.
>>>
>>> Changing this seems like it'd be out of scope right now, but
>>> I bring it up in case I'm wrong and it actually is possible
>>> to know which specific tasks need to be synchronized with
>>> which global state stores. If we could know that, then we'd
>>> only have to block some of the tasks, not all of the
>>> threads.
>>>
>>> Thanks,
>>> -John
>>>
>>>
>>> On Tue, 2020-08-18 at 14:10 -0700, Matthias J. Sax wrote:
 Thanks for the discussion.

 I agree that this KIP is justified in any case -- even if we don't
 change public API, as the change in behavior is significant.

 A better documentation for cleanup policy is always good (even if I am
 not aware of any concrete complaints atm that users were not aware of
 the implications). Of course, for a regular KTable, one can
 enable/disable the source-topic-changelog optimization and thus can use
 a non-compacted topic for this case, what is quite a difference to
 global stores/tables; so maybe it's worth to point out this difference
 explicitly.

 As mentioned before, the main purpose of the original Jira was to avoid
 the crash situation but to allow for auto-recovering while it was an
 open question if it makes sense / would be useful to allow users to
 specify a custom reset policy instead of using a hard-coded "earliest"
 strategy. -- It seems it's still unclear if 

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Colin McCabe
Hi Rajini & Ron,

Thanks, this is an interesting discussion.

I think Ron made a good point earlier that if you have ALTER on CLUSTER then 
you can give yourself whatever other ACLs you want.  And this is true even if 
you are authenticated via a delegation token.  So splitting the ability to 
alter scram users off into a separate ACL doesn't really add any security, 
compared to just letting users with ALTER on CLUSTER do it.

In the past, we've usually started with a simple permissions model, and then 
made it more complex when we got user feedback.  The prefix ACLs we added for 
creating and deleting topics were a great example of a user-driven improvement. 
 On the other hand, in some cases where we started with a simple model, nobody 
has proposed making it more complex.  For example, nobody has proposed more 
complex permissions for starting partition reassignments.

So I'd really like to defer this discussion about adding more complex ACLs for 
SCRAM until we get feedback from users.  Maybe we could have a follow-on KIP to 
do things like give users the ability to change their own SCRAM passwords, or 
add a new entity type, and so on.  It feels a bit out of scope for this KIP, 
since the ZooKeeper-based approach we're replacing didn't support fine-grained 
permissions.  So, can we defer this discussion a bit?

best,
Colin


On Wed, Sep 2, 2020, at 13:08, Rajini Sivaram wrote:
> Hi Ron,
> 
> Sounds good to me, thank you.
> 
> Regards,
> 
> Rajini
> 
> 
> On Wed, Sep 2, 2020 at 8:00 PM Ron Dagostino  wrote:
> 
> > Thanks, Rajini.  That's a good point about authorizing user creation and
> > ACL creation separately to enable that separation as an *option* -- I agree
> > with that, which I think argues for a separate ALTER_USER operation (or
> > ALTER_USER_CREDENTIAL operation; not sure what the best name is, but
> > probably the first one since it is shorter and these names tend to be
> > short).  If we separate it out like this, then I don't think there is a
> > need to restrict delegation tokens from being able to
> > ALTER_USER_SCRAM_CREDENTIALS -- since it is possible to simply never
> > delegate authority to act as a user with those permissions.  Do you agree
> > with that, or do you still believe we should explicitly restrict delegation
> > tokens from ALTER_USER_SCRAM_CREDENTIALS?  I personally still agree with
> > Colin's point about avoiding second-class citizenry.
> >
> > That's also a good point about users perhaps wanting to change their own
> > password.  I don't think that has come up.  If we were to add this, then it
> > would be the case that a user would be authorized to change their own
> > password at all times.  But in this case I think we would have to restrict
> > delegation tokens from changing the password of the underlying credential
> > since the user doesn't know the password to that account -- and due to that
> > lack of knowledge I think this is not a case of being a second-class
> > citizen and the restriction is justifiable.
> >
> > So, to summarize, I am tentatively proposing the following:
> >
> > 1) ALTER_USER_SCRAM_CREDENTIALS is allowed on any credential if a new
> > ALTER_USER operation on the CLUSTER resource is authorized -- even if
> > authenticated via delegation token
> > 2) Assuming (1) does not apply, ALTER_USER_SCRAM_CREDENTIALS is also
> > allowed if the only alterations requested are alterations to one or more
> > credentials associated with the authenticated user making the request, with
> > the added caveat that the authentication must not have been via delegation
> > token (i.e. you can't alter credentials, in the absence of (1) above, for
> > the user who delegated their authority to you).
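A compact sketch of those two rules (hypothetical helper; the real check
would live in the broker's request handling):

    // 1) ALTER_USER on CLUSTER allows altering any credential, even when
    //    authenticated via a delegation token.
    // 2) Otherwise a user may alter only their own credentials, and not
    //    when authenticated via a delegation token.
    def mayAlterScramCredentials(principal: String,
                                 viaDelegationToken: Boolean,
                                 hasAlterUserOnCluster: Boolean,
                                 targetUsers: Set[String]): Boolean =
      hasAlterUserOnCluster ||
        (!viaDelegationToken && targetUsers.subsetOf(Set(principal)))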
> >
> > Thoughts?
> >
> > Ron
> >
> >
> > On Wed, Sep 2, 2020 at 2:16 PM Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron,
> > >
> > > Not sure the person who creates/manages users is always the person who
> > > controls access to Kafka resources. Separate ACLs gives the flexibility
> > to
> > > keep them separate while you can still grant both to the user, while a
> > > combined ACL means that they can only be granted together. A related
> > > question is about password change. In the current model, User:Alice
> > > authenticated as Alice cannot change Alice's password without Alter
> > > permissions on the Cluster. The delegation token model where a user has
> > > more control over their own credential seems more appropriate in this
> > case.
> > > Not sure if we considered and rejected that approach.
> > >
> > >
> > > On Wed, Sep 2, 2020 at 5:57 PM Ron Dagostino  wrote:
> > >
> > > > Hi Rajini.  Thanks for the explanation.
> > > >
> > > > I think these are the APIs that are authorized via the ALTER CLUSTER
> > > > operation, none of which are restricted when authenticating via
> > > delegation
> > > > token:
> > > >
> > > > ALTER_PARTITION_REASSIGNMENTS
> > > > ALTER_REPLICA_LOG_DIRS
> > > > CREATE_ACL
> > > > DELETE_ACL
> > > > ELECT_LEADERS
> > > >
> > > > I think if 

Re: [VOTE] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-09-02 Thread Matthias J. Sax
+1 (binding)

On 8/26/20 8:02 AM, John Roesler wrote:
> Hi all,
> 
> I've just sent a new message to the DISCUSS thread. We
> forgot to include the Scala API in the proposal.
> 
> Thanks,
> -John
> 
> On Mon, 2020-08-24 at 18:00 -0700, Sophie Blee-Goldman
> wrote:
>> Thanks for the KIP! +1 (non-binding)
>>
>> Sophie
>>
>> On Mon, Aug 24, 2020 at 5:06 PM John Roesler  wrote:
>>
>>> Thanks Leah,
>>> I’m +1 (binding)
>>>
>>> -John
>>>
>>> On Mon, Aug 24, 2020, at 16:54, Leah Thomas wrote:
 Hi everyone,

 I'd like to kick-off the vote for KIP-659: Improve
 TimeWindowedDeserializer
 and TimeWindowedSerde to handle window size.

>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-659%3A+Improve+TimeWindowedDeserializer+and+TimeWindowedSerde+to+handle+window+size
 Thanks,
 Leah

> 





Re: [DISCUSS] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-09-02 Thread Matthias J. Sax
Thanks for the KIP and the detailed discussion. I guess this all makes
sense.

-Matthias

On 9/2/20 1:28 PM, Leah Thomas wrote:
> Hey John,
> 
> I see what you're saying about the console consumer in particular. I don't think
> that adding the extra config would *hurt* at all, so I'm good with keeping
> that in the KIP. I re-updated the KIP proposal to include the configs.
> 
> The serde resolution sounds good to me as well, I added a few lines in the
> KIP about logging an error when the *timeWindowedSerde* implicit is called.
> 
> Let me know if there are any other concerns, else I'll resume voting.
> 
> Cheers,
> Leah
> 
> On Tue, Sep 1, 2020 at 11:17 AM John Roesler  wrote:
> 
>> Hi Leah and Sophie,
>>
>> Sorry for the delayed response.
>>
>> You can pass in pre-instantiated (and therefore arbitrarily
>> constructed) deserializers to the KafkaConsumer. However,
>> this doesn't mean we should drop the configs. The same
>> argument for dropping the configs implies that the consumer
>> shouldn't have configs for setting the deserializers at all.
>> This doesn't sound right, and I'm asking myself why. The
>> most likely answer seems to me to be that you sometimes
>> create a Consumer without invoking the Java constructor at
>> all. For example, when you use the console-consumer. In that
>> case, it would be indispensable to be able to fully
>> configure the deserializers via a properties file.
>>
>> Therefore, I think we should go ahead and propose the new
>> config. (Sorry for the flip-flop, Leah)
>>
>> Regarding the implicits, Leah's conclusion sounds good to
>> me. Yuriy is not adding any implicit for this serde to the
>> new class, and we'll just add an ERROR log to the existing
>> implicit. Once KIP-616 is merged, the existing implicit will
>> be deprecated along with all the other implicits in that
>> class, so there will be two "forces" pushing people to the
>> new interface, where they will discover the lack of an
>> implicit, which then forces them to call the non-deprecated
>> constructors directly.
>>
>> To answer Sophie's question, "implicit" is a feature of
>> Scala that allows the type system to automatically resolve
>> method arguments when there is just one possible argument in
>> scope. There's a bunch of docs for it, so I won't waste a
>> ton of e-ink on the details; the docs will be crystal clear
>> just assuming you know all about monads and monoids and
>> type-level programming ;)
>>
>> The punch line for us is that we provide implicits for the
>> basic serdes, and also for turning pairs of
>> serializers/deserializers into serdes, so you can avoid
>> explicitly passing any serdes into Streams DSL operations,
>> but also not have to fall back on the default key/value
>> serde configs. Instead, the type system will plug in the
>> right serde for the K/V types at each operation.
>>
>> We would _not_ add an implicit for a serde that we can't
>> construct in a context-free way using just type information,
>> as in this case. That's why Yuriy dropped the new implicit
>> and why we're going to add an error to the existing
>> implicit. On the other hand, removing the existing implicit
>> will cause compiler errors when the type system is no longer
>> able to find a suitable argument for an implicit parameter,
>> so we don't want to just remove the existing implicit.
>>
>> Thanks,
>> -John
>>
>> On Mon, 2020-08-31 at 16:28 -0500, Leah Thomas wrote:
>>> Hey Sophie,
>>>
>>> Thanks for the catch! It makes sense that the consumer would accept a
>>> deserializer somewhere, so we can definitely skip the additional
>> configs. I
>>> updated the KIP to reflect that.
>>>
>>> John seems to know Scala better than I do as well, but I think we need to
>>> keep the current implicit that allows users to just pass in a serde and
>> no
>>> window size for backwards compatibility. It seems to me that based on the
>>> discussion around KIP-616, we
>>> can pretty easily do John's third suggestion for handling this implicit:
>>> logging an error message and passing to a non-deprecated constructor
>> using
>>> some default value. It seems from KIP-616 that most scala users will use
>>> the new Serdes class anyways, and Yuriy is just removing these implicits
>> so
>>> it seems like whatever fix we decide for this class won't get used too
>>> heavily.
>>>
>>> Cheers,
>>> Leah
>>>
>>> On Thu, Aug 27, 2020 at 8:49 PM Sophie Blee-Goldman >>
>>> wrote:
>>>
 Ok I'm definitely feeling pretty dumb now, but I was just thinking how
 ridiculous
 it is that the Consumer forces you to configure your Deserializer
>> through
 actual
 config maps instead of just taking the ones you pass in directly. So I
 thought
 "why not just fix the Consumer to allow passing in an actual
>> Deserializer
 object"
 and went to go through the code in case there's some legitimate reason
>> why
 not,
 and what do you know. You actually can pass in an actual 

Re: Request for KIP creation access

2020-09-02 Thread Matthias J. Sax
Added you to the wiki.

On 9/2/20 12:45 PM, satyanarayan komandur wrote:
> Hi,
> I would like access to create a KIP for Kafka. My wiki id is
> ksbalan2016. I have opened a Jira ticket and had a brief interaction with
> Matthias on Stack Overflow about this ticket. He suggested that I open a KIP
> about it.
> 
> Thanks for your help.
> Please let me know if you have additional questions.
> 
> Thanks again
> Balan
> 





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #43

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix build scala 2.12 build after KAFKA-10020 (#9245)


--
[...truncated 6.52 MB...]
org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #40

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix build scala 2.12 build after KAFKA-10020 (#9245)


--
[...truncated 6.46 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowNoSuchElementExceptionForUnusedOutputTopicWithDynamicRouting[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #42

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Include call name in TimeoutException (#8050)

[github] MINOR: Fix build scala 2.12 build after KAFKA-10020 (#9245)


--
[...truncated 6.51 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 

Re: [DISCUSS] KIP-659: Improve TimeWindowedDeserializer and TimeWindowedSerde to handle window size

2020-09-02 Thread Leah Thomas
Hey John,

I see what you say about the console consumer in particular. I don't think
that adding the extra config would *hurt* at all, so I'm good with keeping
that in the KIP. I re-updated the KIP proposal to include the configs.

The serde resolution sounds good to me as well, I added a few lines in the
KIP about logging an error when the *timeWindowedSerde* implicit is called.

Let me know if there are any other concerns, else I'll resume voting.

Cheers,
Leah

On Tue, Sep 1, 2020 at 11:17 AM John Roesler  wrote:

> Hi Leah and Sophie,
>
> Sorry for the delayed response.
>
> You can pass in pre-instantiated (and therefore arbirarily
> constructed) deserializers to the KafkaConsumer. However,
> this doesn't mean we should drop the configs. The same
> argument for dropping the configs implies that the consumer
> shouldn't have configs for setting the deserializers at all.
> This doesn't sound right, and I'm asking myself why. The
> most likely answer seems to me to be that you sometimes
> create a Consumer without invoking the Java constructor at
> all. For example, when you use the console-consumer. In that
> case, it would be indispensible to be able to fully
> configure the deserializers via a properties file.
>
> Therefore, I think we should go ahead and propose the new
> config. (Sorry for the flip-flop, Leah)
>
> Regarding the implicits, Leah's conclusion sounds good to
> me. Yuriy is not adding any implicit for this serde to the
> new class, and we'll just add an ERROR log to the existing
> implicit. Once KIP-616 is merged, the existing implicit will
> be deprecated along with all the other implicits in that
> class, so there will be two "forces" pushing people to the
> new interface, where they will discover the lack of an
> implicit, which then forces them to call the non-deprecated
> constructors directly.
>
> To answer Sophie's question, "implicit" is a feature of
> Scala that allows the type system to automatically resolve
> method arguments when there is just one possible argument in
> scope. There's a bunch of docs for it, so I won't waste a
> ton of e-ink on the details; the docs will be crystal clear
> just assuming you know all about monads and monoids and
> type-level programming ;)
>
> The punch line for us is that we provide implicits for the
> basic serdes, and also for turning pairs of
> serializers/deserializers into serdes, so you can avoid
> explicitly passing any serdes into Streams DSL operations,
> but also not have to fall back on the default key/value
> serde configs. Instead, the type system will plug in the
> right serde for the K/V types at each operation.
>
> We would _not_ add an implicit for a serde that we can't
> construct in a context-free way using just type information,
> as in this case. That's why Yuriy dropped the new implicit
> and why we're going to add an error to the existing
> implicit. On the other hand, removing the existing implicit
> will cause compiler errors when the type system is no longer
> able to find a suitable argument for an implicit parameter,
> so we don't want to just remove the existing implicit.
>
> Thanks,
> -John
>
> On Mon, 2020-08-31 at 16:28 -0500, Leah Thomas wrote:
> > Hey Sophie,
> >
> > Thanks for the catch! It makes sense that the consumer would accept a
> > deserializer somewhere, so we can definitely skip the additional configs. I
> > updated the KIP to reflect that.
> >
> > John seems to know Scala better than I do as well, but I think we need to
> > keep the current implicit that allows users to just pass in a serde and no
> > window size for backwards compatibility. It seems to me that based on the
> > discussion around KIP-616, we can pretty easily do John's third suggestion
> > for handling this implicit: logging an error message and passing to a
> > non-deprecated constructor using some default value. It seems from KIP-616
> > that most scala users will use the new Serdes class anyways, and Yuriy is
> > just removing these implicits so it seems like whatever fix we decide for
> > this class won't get used too heavily.
> >
> > Cheers,
> > Leah
> >
> > On Thu, Aug 27, 2020 at 8:49 PM Sophie Blee-Goldman  wrote:
> >
> > > Ok I'm definitely feeling pretty dumb now, but I was just thinking how
> > > ridiculous it is that the Consumer forces you to configure your
> > > Deserializer through actual config maps instead of just taking the ones
> > > you pass in directly. So I thought "why not just fix the Consumer to
> > > allow passing in an actual Deserializer object" and went to go through
> > > the code in case there's some legitimate reason why not, and what do you
> > > know. You actually can pass in an actual Deserializer object! There is a
> > > KafkaConsumer constructor that accepts a key and value Deserializer, and
> > > doesn't instantiate or configure a new one if provided in this way. Duh.
> > >
> > > 
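
As a concrete illustration of the constructor route discussed above, here is a
minimal Java sketch: the deserializers are pre-instantiated, so the window size
can be passed explicitly and never has to round-trip through a config map. The
broker address, group id, topic name, and 500 ms window size are placeholders,
not values taken from the KIP.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.streams.kstream.TimeWindowedDeserializer;
    import org.apache.kafka.streams.kstream.Windowed;

    public class WindowedConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "windowed-demo");            // placeholder

            // The window size is supplied to the deserializer directly, so no
            // config-based plumbing is needed on this code path.
            TimeWindowedDeserializer<String> keyDeserializer =
                    new TimeWindowedDeserializer<>(new StringDeserializer(), 500L); // 500 ms, placeholder
            StringDeserializer valueDeserializer = new StringDeserializer();

            try (KafkaConsumer<Windowed<String>, String> consumer =
                         new KafkaConsumer<>(props, keyDeserializer, valueDeserializer)) {
                consumer.subscribe(Collections.singletonList("windowed-counts")); // placeholder topic
                consumer.poll(Duration.ofMillis(100));
            }
        }
    }

The proposed config matters on the other code path, where a tool such as the
console consumer instantiates the deserializer reflectively and can only
configure it through properties.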

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Rajini Sivaram
Hi Ron,

Sounds good to me, thank you.

Regards,

Rajini


On Wed, Sep 2, 2020 at 8:00 PM Ron Dagostino  wrote:

> Thanks, Rajini.  That's a good point about authorizing user creation and
> ACL creation separately to enable that separation as an *option* -- I agree
> with that, which I think argues for a separate ALTER_USER operation (or
> ALTER_USER_CREDENTIAL operation; not sure what the best name is, but
> probably the first one since it is shorter and these names tend to be
> short).  If we separate it out like this, then I don't think there is a
> need to restrict delegation tokens from being able to
> ALTER_USER_SCRAM_CREDENTIALS -- since it is possible to simply never
> delegate authority to act as a user with those permissions.  Do you agree
> with that, or do you still believe we should explicitly restrict delegation
> tokens from ALTER_USER_SCRAM_CREDENTIALS?  I personally still agree with
> Colin's point about avoiding second-class citizenry.
>
> That's also a good point about users perhaps wanting to change their own
> password.  I don't think that has come up.  If we were to add this, then it
> would be the case that a user would be authorized to change their own
> password at all times.  But in this case I think we would have to restrict
> delegation tokens from changing the password of the underlying credential
> since the user doesn't know the password to that account -- and due to that
> lack of knowledge I think this is not a case of being a second-class
> citizen and the restriction is justifiable.
>
> So, to summarize, I am tentatively proposing the following:
>
> 1) ALTER_USER_SCRAM_CREDENTIALS is allowed on any credential if a new
> ALTER_USER operation on the CLUSTER resource is authorized -- even if
> authenticated via delegation token
> 2) Assuming (1) does not apply, ALTER_USER_SCRAM_CREDENTIALS is also
> allowed if the only alterations requested are alterations to one or more
> credentials associated with the authenticated user making the request, with
> the added caveat that the authentication must not have been via delegation
> token (i.e. you can't alter credentials, in the absence of (1) above, for
> the user who delegated their authority to you).
>
> Thoughts?
>
> Ron
>
>
> On Wed, Sep 2, 2020 at 2:16 PM Rajini Sivaram 
> wrote:
>
> > Hi Ron,
> >
> > Not sure the person who creates/manages users is always the person who
> > controls access to Kafka resources. Separate ACLs give the flexibility to
> > keep them separate while you can still grant both to the user, while a
> > combined ACL means that they can only be granted together. A related
> > question is about password change. In the current model, User:Alice
> > authenticated as Alice cannot change Alice's password without Alter
> > permissions on the Cluster. The delegation token model where a user has
> > more control over their own credential seems more appropriate in this
> case.
> > Not sure if we considered and rejected that approach.
> >
> >
> > On Wed, Sep 2, 2020 at 5:57 PM Ron Dagostino  wrote:
> >
> > > Hi Rajini.  Thanks for the explanation.
> > >
> > > I think these are the APIs that are authorized via the ALTER CLUSTER
> > > operation, none of which are restricted when authenticating via
> > delegation
> > > token:
> > >
> > > ALTER_PARTITION_REASSIGNMENTS
> > > ALTER_REPLICA_LOG_DIRS
> > > CREATE_ACL
> > > DELETE_ACL
> > > ELECT_LEADERS
> > >
> > > I think if we are going to ALTER_USER_SCRAM_CREDENTIALS then we are
> > likely
> > > going to want to CREATE_ACL as well -- it feels like there's no sense
> in
> > > creating a user but then not being able to authorize the user to do
> > > anything.  (Unless I am wrong here?). If this is correct, then
> > > following that to its logical conclusion, it feels like we should
> > authorize
> > > ALTER_USER_SCRAM_CREDENTIALS via the same ALTER CLUSTER operation.  And
> > > then with respect to delegation tokens, I think we would either need to
> > > allow delegation tokens to do both or we should prevent delegation
> tokens
> > > from altering credentials.  And then that gets to Colin's point about
> > > whether sessions authenticated via delegation token should be
> > second-class
> > > in some way, which I am inclined to think they should not.
> > >
> > > Ron
> > >
> > >
> > > On Wed, Sep 2, 2020 at 11:23 AM Rajini Sivaram <
> rajinisiva...@gmail.com>
> > > wrote:
> > >
> > > > Hi Ron/Colin,
> > > >
> > > > Without any restrictions, if delegation tokens can be used to create
> > new
> > > > users or change the password of the user you are impersonating, you
> > also
> > > > get the power to create/renew a new token by authenticating as a
> SCRAM
> > > user
> > > > you just created or updated. It seems like a new power that we are
> > > granting
> > > > in a new API using an existing ACL. User management is a new class of
> > > > operations we are adding to the Kafka protocol. An alternative to
> > > > restricting delegation tokens would be to add a new ACL 
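
For reference, here is a minimal sketch of the AdminClient call this thread is
deciding how to authorize, based on the Java API described in the KIP; the
broker address, user name, iteration count, and password are placeholders.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ScramCredentialInfo;
    import org.apache.kafka.clients.admin.ScramMechanism;
    import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

    public class ScramUpsertSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (Admin admin = Admin.create(props)) {
                // Create or update the SCRAM-SHA-256 credential for user "alice".
                ScramCredentialInfo info =
                        new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192);
                admin.alterUserScramCredentials(Collections.singletonList(
                        new UserScramCredentialUpsertion("alice", info, "alice-secret")))
                     .all()
                     .get(); // this is the call the authorization rules above would gate
            }
        }
    }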

Re: [DISCUSS] KIP-663: API to Start and Shut Down Stream Threads and to Request Closing of Kafka Streams Clients

2020-09-02 Thread Matthias J. Sax
Thanks for updating the KIP.

Why do you propose to return `boolean` from addStreamThread() if the
thread could not be started? As an alternative, we could also throw an
exception if the client is not in state RUNNING. -- I guess both are
valid options; I just want to see what the pros/cons of each approach
would be.

Btw: should we allow adding a new thread if the state is REBALANCING,
too? I actually don't see a reason why we should not allow this.

For removeStreamThread(), might it be worth actually guaranteeing that
the thread with the largest index is stopped instead of leaving it
unspecified? It does not seem to be a big burden on the implementation,
and given that we plan to reuse indices of dead threads, it might be
nice to have a contract? Or would there be any negative impact if we
guarantee it?

Another thought: should we add a parameter `numberOfThreads` to each
method to allow users to start/stop multiple threads at once?

What happens if there are zero running threads and one calls
removeStreamThread()? Should we also add a `boolean` flag and return
`false` for this case (or throw an exception)?


For the metric name, I would prefer "failed" over "crashed". Thoughts?


Side remark for the PR: can we make sure to update the description of
`num.stream.threads` to explain that it's the _initial_ number of
threads on startup?


-Matthias


On 9/1/20 2:01 PM, Walker Carlson wrote:
> Hi Bruno,
> 
> I read through your updated KIP and it looks good to me. I agree with
> adding the metric to keep track of crashed streams in place of a list of
> dead streams.
> 
> best,
> Walker :)
> 
> On Tue, Sep 1, 2020 at 1:05 PM Bruno Cadonna  wrote:
> 
>> Hi John,
>>
>> your proposal makes sense! I will update the KIP.
>>
>> Best,
>> Bruno
>>
>> On 01.09.20 17:31, John Roesler wrote:
>>> Hello Bruno,
>>>
>>> Thanks for the update! The KIP looks good to me; I only have
>>> a grammatical complaint about the proposed metric name.
>>>
>>> "Died" is a verb, the past tense of "to die", but in the
>>> expression,"x stream threads", x should be an adjective. To
>>> be fair, "died" is also the past participle of "to die", and
>>> participles can usually be used as adjectives. Maybe it
>>> sounds wrong to me because there's already a specifically
>>> adjectival form: "dead". So "dead-stream-threads" seems more
>>> natural.
>>>
>>> However, I'm not sure if that captures the specific meaning
>>> you're shooting for, namely that the metric counts only the
>>> threads that died exceptionally, vs. from calling
>>> "removeStreamThread()". What do you think of "crashed-
>>> stream-threads" instead?
>>>
>>> Thanks,
>>> -John
>>>
>>> On Tue, 2020-09-01 at 11:30 +0200, Bruno Cadonna wrote:
 Hi,

 I updated the KIP with the feedback so far. I removed the API to close
 the Kafka Streams client asynchronously, since it should be possible to
 avoid the deadlock with the existing method and without a KIP.

 Please have a look at the updated KIP and let me know what you think.


>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads

 Best,
 Bruno

 On 26.08.20 16:31, Bruno Cadonna wrote:
> Hi,
>
> I would like to propose the following KIP to start and shut down stream
> threads during execution as well as to shut down asynchronously a Kafka
> Streams client from an uncaught exception handler.
>
>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-663%3A+API+to+Start+and+Shut+Down+Stream+Threads+and+to+Request+Closing+of+Kafka+Streams+Clients
>
>
> Best,
> Bruno
>>>
>>
> 



signature.asc
Description: OpenPGP digital signature
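
To make the API shape under discussion concrete, here is a rough sketch of
adding a replacement thread from an uncaught exception handler. The
Optional-returning signature is one possible resolution of the
boolean-versus-exception question above and is an assumption, not settled API;
the handler logic is likewise illustrative only.

    import java.util.Optional;
    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;

    public class AddThreadSketch {
        public static void main(String[] args) {
            Properties props = new Properties(); // application.id, bootstrap.servers, etc.
            KafkaStreams streams = new KafkaStreams(new StreamsBuilder().build(), props);

            streams.setUncaughtExceptionHandler((thread, throwable) -> {
                // Assumed shape of the proposed method: an empty Optional (or a
                // false return, depending on how this thread resolves) signals
                // that the client was not in a state that allows adding a thread.
                Optional<String> newThread = streams.addStreamThread();
                if (!newThread.isPresent()) {
                    // e.g. the client was not RUNNING; log and give up
                }
            });

            streams.start();
        }
    }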


Re: [VOTE] KIP-612: Ability to Limit Connection Creation Rate on Brokers

2020-09-02 Thread David Jacot
Hi Anna,

Thanks for the update.

If you use Token Bucket, it will expose another metric which reports the
number of remaining tokens in the bucket, in addition to the current rate
metric. It would be great to add it in the metrics section of the KIP as
well
for completeness.

Best,
David

On Tue, Aug 11, 2020 at 4:28 AM Anna Povzner  wrote:

> Hi All,
>
> I wanted to let everyone know that we would like to make the following
> changes to the KIP:
>
> 1. Expose connection acceptance rate metrics (broker-wide and
>    per-listener) and per-listener average throttle time metrics for
>    better observability and debugging.
>
> 2. KIP-599 introduced a new implementation of MeasurableStat that
>    implements a token bucket, which improves rate throttling for bursty
>    workloads (KAFKA-10162). We would like to use this same mechanism for
>    connection accept rate throttling.
>
>
> I updated the KIP to reflect these changes.
>
> Let me know if you have any concerns.
>
> Thanks,
>
> Anna
>
>
> On Thu, May 21, 2020 at 5:42 PM Anna Povzner  wrote:
>
> > The vote for KIP-612 has passed with 3 binding and 3 non-binding +1s, and
> > no objections.
> >
> >
> > Thanks everyone for reviews and feedback,
> >
> > Anna
> >
> > On Tue, May 19, 2020 at 2:41 AM Rajini Sivaram 
> > wrote:
> >
> >> +1 (binding)
> >>
> >> Thanks for the KIP, Anna!
> >>
> >> Regards,
> >>
> >> Rajini
> >>
> >>
> >> On Tue, May 19, 2020 at 9:32 AM Alexandre Dupriez <
> >> alexandre.dupr...@gmail.com> wrote:
> >>
> >> > +1 (non-binding)
> >> >
> >> > Thank you for the KIP!
> >> >
> >> >
> >> > On Tue, May 19, 2020 at 07:57, David Jacot  wrote:
> >> > >
> >> > > +1 (non-binding)
> >> > >
> >> > > Thanks for the KIP, Anna!
> >> > >
> >> > > On Tue, May 19, 2020 at 7:12 AM Satish Duggana <
> >> satish.dugg...@gmail.com
> >> > >
> >> > > wrote:
> >> > >
> >> > > > +1 (non-binding)
> >> > > > Thanks Anna for the nice feature to control the connection
> creation
> >> > rate
> >> > > > from the clients.
> >> > > >
> >> > > > On Tue, May 19, 2020 at 8:16 AM Gwen Shapira 
> >> > wrote:
> >> > > >
> >> > > > > +1 (binding)
> >> > > > >
> >> > > > > Thank you for driving this, Anna
> >> > > > >
> >> > > > > On Mon, May 18, 2020 at 4:55 PM Anna Povzner  wrote:
> >> > > > >
> >> > > > > > Hi All,
> >> > > > > >
> >> > > > > > I would like to start the vote on KIP-612: Ability to limit
> >> > connection
> >> > > > > > creation rate on brokers.
> >> > > > > >
> >> > > > > > For reference, here is the KIP wiki:
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-612%3A+Ability+to+Limit+Connection+Creation+Rate+on+Brokers
> >> > > > > >
> >> > > > > > And discussion thread:
> >> > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> >
> >>
> https://lists.apache.org/thread.html/r61162661fa307d0bc5c8326818bf223a689c49e1c828c9928ee26969%40%3Cdev.kafka.apache.org%3E
> >> > > > > >
> >> > > > > > Thanks,
> >> > > > > >
> >> > > > > > Anna
> >> > > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Gwen Shapira
> >> > > > > Engineering Manager | Confluent
> >> > > > > 650.450.2760 | @gwenshap
> >> > > > > Follow us: Twitter | blog
> >> > > > >
> >> > > >
> >> >
> >>
> >
>
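
As a rough illustration of the token bucket mechanism referenced above, the
sketch below registers the KIP-599 TokenBucket stat on a sensor with a quota.
The metric names and the 10-per-second limit are placeholders, and the sketch
only approximates in standalone code what the broker does internally; the
violation semantics follow the KIP-599 implementation, where the measured
value is the number of remaining tokens.

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.metrics.MetricConfig;
    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Quota;
    import org.apache.kafka.common.metrics.QuotaViolationException;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.TokenBucket;

    public class ConnectionRateSketch {
        public static void main(String[] args) {
            Metrics metrics = new Metrics();
            MetricConfig config = new MetricConfig()
                    .quota(Quota.upperBound(10))      // placeholder: 10 connections/sec
                    .timeWindow(1, TimeUnit.SECONDS)
                    .samples(10);
            Sensor sensor = metrics.sensor("connection-accept-rate", config);
            // The TokenBucket's measured value is the remaining tokens;
            // the sensor reports a violation once the bucket is exhausted.
            sensor.add(metrics.metricName("connection-accept-rate", "demo-group"),
                    new TokenBucket());

            try {
                sensor.record(); // one accepted connection
            } catch (QuotaViolationException e) {
                // the broker would compute a throttle time here and delay
                // accepting further connections on this listener
            }
        }
    }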


Request for KIP creation access

2020-09-02 Thread satyanarayan komandur
Hi,
I would like to have access for creating KIPs for Kafka. My wiki id is
ksbalan2016. I have opened a Jira ticket and had a brief interaction with
Matthias on Stack Overflow about this ticket. He would like me to open a KIP
in this regard.

Thanks for your help
Please let me know if you have additional questions

Thanks again
Balan


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #42

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10272: Add IBM i support to "stop" scripts (#9023)

[github] MINOR: Include call name in TimeoutException (#8050)


--
[...truncated 3.25 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > 

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Ron Dagostino
Thanks, Rajini.  That's a good point about authorizing user creation and
ACL creation separately to enable that separation as an *option* -- I agree
with that, which I think argues for a separate ALTER_USER operation (or
ALTER_USER_CREDENTIAL operation; not sure what the best name is, but
probably the first one since it is shorter and these names tend to be
short).  If we separate it out like this, then I don't think there is a
need to restrict delegation tokens from being able to
ALTER_USER_SCRAM_CREDENTIALS -- since it is possible to simply never
delegate authority to act as a user with those permissions.  Do you agree
with that, or do you still believe we should explicitly restrict delegation
tokens from ALTER_USER_SCRAM_CREDENTIALS?  I personally still agree with
Colin's point about avoiding second-class citizenry.

That's also a good point about users perhaps wanting to change their own
password.  I don't think that has come up.  If we were to add this, then it
would be the case that a user would be authorized to change their own
password at all times.  But in this case I think we would have to restrict
delegation tokens from changing the password of the underlying credential
since the user doesn't know the password to that account -- and due to that
lack of knowledge I think this is not a case of being a second-class
citizen and the restriction is justifiable.

So, to summarize, I am tentatively proposing the following:

1) ALTER_USER_SCRAM_CREDENTIALS is allowed on any credential if a new
ALTER_USER operation on the CLUSTER resource is authorized -- even if
authenticated via delegation token
2) Assuming (1) does not apply, ALTER_USER_SCRAM_CREDENTIALS is also
allowed if the only alterations requested are alterations to one or more
credentials associated with the authenticated user making the request, with
the added caveat that the authentication must not have been via delegation
token (i.e. you can't alter credentials, in the absence of (1) above, for
the user who delegated their authority to you).

Thoughts?

Ron


On Wed, Sep 2, 2020 at 2:16 PM Rajini Sivaram 
wrote:

> Hi Ron,
>
> Not sure the person who creates/manages users is always the person who
> controls access to Kafka resources. Separate ACLs give the flexibility to
> keep them separate while you can still grant both to the user, while a
> combined ACL means that they can only be granted together. A related
> question is about password change. In the current model, User:Alice
> authenticated as Alice cannot change Alice's password without Alter
> permissions on the Cluster. The delegation token model where a user has
> more control over their own credential seems more appropriate in this case.
> Not sure if we considered and rejected that approach.
>
>
> On Wed, Sep 2, 2020 at 5:57 PM Ron Dagostino  wrote:
>
> > Hi Rajini.  Thanks for the explanation.
> >
> > I think these are the APIs that are authorized via the ALTER CLUSTER
> > operation, none of which are restricted when authenticating via
> delegation
> > token:
> >
> > ALTER_PARTITION_REASSIGNMENTS
> > ALTER_REPLICA_LOG_DIRS
> > CREATE_ACL
> > DELETE_ACL
> > ELECT_LEADERS
> >
> > I think if we are going to ALTER_USER_SCRAM_CREDENTIALS then we are
> likely
> > going to want to CREATE_ACL as well -- it feels like there's no sense in
> > creating a user but then not being able to authorize the user to do
> > anything.  (Unless I am wrong here?). If this is correct, then
> > following that to its logical conclusion, it feels like we should
> authorize
> > ALTER_USER_SCRAM_CREDENTIALS via the same ALTER CLUSTER operation.  And
> > then with respect to delegation tokens, I think we would either need to
> > allow delegation tokens to do both or we should prevent delegation tokens
> > from altering credentials.  And then that gets to Colin's point about
> > whether sessions authenticated via delegation token should be
> second-class
> > in some way, which I am inclined to think they should not.
> >
> > Ron
> >
> >
> > On Wed, Sep 2, 2020 at 11:23 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Ron/Colin,
> > >
> > > Without any restrictions, if delegation tokens can be used to create
> new
> > > users or change the password of the user you are impersonating, you
> also
> > > get the power to create/renew a new token by authenticating as a SCRAM
> > user
> > > you just created or updated. It seems like a new power that we are
> > granting
> > > in a new API using an existing ACL. User management is a new class of
> > > operations we are adding to the Kafka protocol. An alternative to
> > > restricting delegation tokens would be to add a new ACL operation
> instead
> > > of reusing `Alter` for user management : `AlterUsers/DescribeUsers`
> (like
> > > AlterConfigs/DescribeConfigs).
> > >
> > > Regards,
> > >
> > > Rajini
> > >
> > >
> > > On Wed, Sep 2, 2020 at 12:24 AM Colin McCabe 
> wrote:
> > >
> > > > Hi Ron,
> > > >
> > > > Thanks.  We can wait for Rajini's reply to finalize this, but for
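
A companion sketch for the proposal above, again using the AdminClient
surface described in the KIP: deleting one credential and describing what
remains. The user name and mechanism are placeholders, and the result-type
accessors are assumed from the KIP rather than from released code.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.ScramMechanism;
    import org.apache.kafka.clients.admin.UserScramCredentialDeletion;

    public class ScramDeletionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

            try (Admin admin = Admin.create(props)) {
                // Remove alice's SCRAM-SHA-256 credential; rule (2) above would
                // let alice do this for herself, unless she authenticated with
                // a delegation token.
                admin.alterUserScramCredentials(Collections.singletonList(
                        new UserScramCredentialDeletion("alice", ScramMechanism.SCRAM_SHA_256)))
                     .all().get();

                // Describe the credentials that remain for her (per the KIP,
                // only mechanism and iteration count come back, never salts
                // or stored keys).
                admin.describeUserScramCredentials(Collections.singletonList("alice"))
                     .all().get()
                     .forEach((user, description) ->
                             System.out.println(user + " -> " + description.credentialInfos()));
            }
        }
    }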

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #39

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10098: Remove unnecessary escaping in regular expression. (#8798)

[github] KAFKA-10272: Add IBM i support to "stop" scripts (#9023)

[github] MINOR: Include call name in TimeoutException (#8050)


--
[...truncated 3.22 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Rajini Sivaram
Hi Ron,

Not sure the person who creates/manages users is always the person who
controls access to Kafka resources. Separate ACLs give the flexibility to
keep them separate while you can still grant both to the user, while a
combined ACL means that they can only be granted together. A related
question is about password change. In the current model, User:Alice
authenticated as Alice cannot change Alice's password without Alter
permissions on the Cluster. The delegation token model where a user has
more control over their own credential seems more appropriate in this case.
Not sure if we considered and rejected that approach.


On Wed, Sep 2, 2020 at 5:57 PM Ron Dagostino  wrote:

> Hi Rajini.  Thanks for the explanation.
>
> I think these are the APIs that are authorized via the ALTER CLUSTER
> operation, none of which are restricted when authenticating via delegation
> token:
>
> ALTER_PARTITION_REASSIGNMENTS
> ALTER_REPLICA_LOG_DIRS
> CREATE_ACL
> DELETE_ACL
> ELECT_LEADERS
>
> I think if we are going to ALTER_USER_SCRAM_CREDENTIALS then we are likely
> going to want to CREATE_ACL as well -- it feels like there's no sense in
> creating a user but then not being able to authorize the user to do
> anything.  (Unless I am wrong here?). If this is correct, then
> following that to its logical conclusion, it feels like we should authorize
> ALTER_USER_SCRAM_CREDENTIALS via the same ALTER CLUSTER operation.  And
> then with respect to delegation tokens, I think we would either need to
> allow delegation tokens to do both or we should prevent delegation tokens
> from altering credentials.  And then that gets to Colin's point about
> whether sessions authenticated via delegation token should be second-class
> in some way, which I am inclined to think they should not.
>
> Ron
>
>
> On Wed, Sep 2, 2020 at 11:23 AM Rajini Sivaram 
> wrote:
>
> > Hi Ron/Colin,
> >
> > Without any restrictions, if delegation tokens can be used to create new
> > users or change the password of the user you are impersonating, you also
> > get the power to create/renew a new token by authenticating as a SCRAM
> user
> > you just created or updated. It seems like a new power that we are
> granting
> > in a new API using an existing ACL. User management is a new class of
> > operations we are adding to the Kafka protocol. An alternative to
> > restricting delegation tokens would be to add a new ACL operation instead
> > of reusing `Alter` for user management : `AlterUsers/DescribeUsers` (like
> > AlterConfigs/DescribeConfigs).
> >
> > Regards,
> >
> > Rajini
> >
> >
> > On Wed, Sep 2, 2020 at 12:24 AM Colin McCabe  wrote:
> >
> > > Hi Ron,
> > >
> > > Thanks.  We can wait for Rajini's reply to finalize this, but for now I
> > > guess that will unblock the PR at least.  If we do decide we want the
> > > restriction we can do a follow-on PR.
> > >
> > > It's good to see this API moving forward!
> > >
> > > best,
> > > Colin
> > >
> > >
> > > On Tue, Sep 1, 2020, at 12:55, Ron Dagostino wrote:
> > > > Hi Colin.  I've removed that requirement from the KIP and updated the
> > PR
> > > > accordingly.
> > > >
> > > > Ron
> > > >
> > > > On Fri, Aug 28, 2020 at 2:27 PM Colin McCabe 
> > wrote:
> > > >
> > > > > Hi Ron,
> > > > >
> > > > > Thanks for the update.  I agree with all of these changes, except I
> > > think
> > > > > we should discuss this one further:
> > > > >
> > > > > On Wed, Aug 26, 2020, at 14:59, Ron Dagostino wrote:
> > > > > >
> > > > > > 2. We added a restriction to not allow users who authenticated
> > using
> > > > > > delegation tokens to create or update user SCRAM credentials. We
> > > don't
> > > > > > allow such authenticated users to create new tokens, and it would
> > be
> > > odd
> > > > > if
> > > > > > they could create a new user or change the password of the user
> for
> > > the
> > > > > > token.
> > > > > >
> > > > >
> > > > > I don't think these two restrictions are comparable.  It seems to
> me
> > > that
> > > > > we forbid creating a new token based on an existing token in order
> to
> > > force
> > > > > users of delegation tokens to re-authenticate periodically through
> > the
> > > > > regular auth system.  If they could just create a new token based
> on
> > > their
> > > > > old token, there would be an obvious "wishing for more wishes"
> > problem
> > > and
> > > > > they could just sidestep the regular authentication system entirely
> > > once
> > > > > they had a token.
> > > > >
> > > > > This issue doesn't exist here, since creating a new SCRAM user
> > doesn't
> > > do
> > > > > anything to extend the life of the existing delegation token.  If
> the
> > > user
> > > > > has the permission to change SCRAM users, I don't see any reason
> why
> > we
> > > > > should forbid them from doing just that.  Users of delegation
> tokens
> > > > > shouldn't be second-class citizens. A user with ALTER on CLUSTER
> > should
> > > > > have all the permissions associated with ALTER on CLUSTER,
> regardless
> > 
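
Since much of this thread turns on what a delegation token holder can do, here
is a minimal sketch of obtaining a token with the existing AdminClient API.
The broker address and renewer principal are placeholders, and the SASL login
configuration for the authenticating user is elided.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.CreateDelegationTokenOptions;
    import org.apache.kafka.common.security.auth.KafkaPrincipal;
    import org.apache.kafka.common.security.token.delegation.DelegationToken;

    public class DelegationTokenSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            // sasl.jaas.config etc. for the authenticating user would go here

            try (Admin admin = Admin.create(props)) {
                // The token is issued for the authenticated principal;
                // "bob" is allowed to renew it.
                DelegationToken token = admin.createDelegationToken(
                        new CreateDelegationTokenOptions().renewers(
                                Collections.singletonList(new KafkaPrincipal("User", "bob"))))
                        .delegationToken().get();
                System.out.println("token id: " + token.tokenInfo().tokenId());
            }
        }
    }

The open question above is whether a session authenticated with such a token
should be allowed to alter SCRAM credentials at all, or only under a new,
separate ACL operation.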

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #41

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10098: Remove unnecessary escaping in regular expression. (#8798)

[github] KAFKA-10272: Add IBM i support to "stop" scripts (#9023)


--
[...truncated 3.25 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #41

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10098: Remove unnecessary escaping in regular expression. (#8798)


--
[...truncated 3.25 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > 

Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Ron Dagostino
Hi Rajini.  Thanks for the explanation.

I think these are the APIs that are authorized via the ALTER CLUSTER
operation, none of which are restricted when authenticating via delegation
token:

ALTER_PARTITION_REASSIGNMENTS
ALTER_REPLICA_LOG_DIRS
CREATE_ACL
DELETE_ACL
ELECT_LEADERS

I think if we are going to ALTER_USER_SCRAM_CREDENTIALS then we are likely
going to want to CREATE_ACL as well -- it feels like there's no sense in
creating a user but then not being able to authorize the user to do
anything.  (Unless I am wrong here?). If this is correct, then
following that to its logical conclusion, it feels like we should authorize
ALTER_USER_SCRAM_CREDENTIALS via the same ALTER CLUSTER operation.  And
then with respect to delegation tokens, I think we would either need to
allow delegation tokens to do both or we should prevent delegation tokens
from altering credentials.  And then that gets to Colin's point about
whether sessions authenticated via delegation token should be second-class
in some way, which I am inclined to think they should not.

Ron


On Wed, Sep 2, 2020 at 11:23 AM Rajini Sivaram 
wrote:

> Hi Ron/Colin,
>
> Without any restrictions, if delegation tokens can be used to create new
> users or change the password of the user you are impersonating, you also
> get the power to create/renew a new token by authenticating as a SCRAM user
> you just created or updated. It seems like a new power that we are granting
> in a new API using an existing ACL. User management is a new class of
> operations we are adding to the Kafka protocol. An alternative to
> restricting delegation tokens would be to add a new ACL operation instead
> of reusing `Alter` for user management : `AlterUsers/DescribeUsers` (like
> AlterConfigs/DescribeConfigs).
>
> Regards,
>
> Rajini
>
>
> On Wed, Sep 2, 2020 at 12:24 AM Colin McCabe  wrote:
>
> > Hi Ron,
> >
> > Thanks.  We can wait for Rajini's reply to finalize this, but for now I
> > guess that will unblock the PR at least.  If we do decide we want the
> > restriction we can do a follow-on PR.
> >
> > It's good to see this API moving forward!
> >
> > best,
> > Colin
> >
> >
> > On Tue, Sep 1, 2020, at 12:55, Ron Dagostino wrote:
> > > Hi Colin.  I've removed that requirement from the KIP and updated the
> PR
> > > accordingly.
> > >
> > > Ron
> > >
> > > On Fri, Aug 28, 2020 at 2:27 PM Colin McCabe 
> wrote:
> > >
> > > > Hi Ron,
> > > >
> > > > Thanks for the update.  I agree with all of these changes, except I
> > think
> > > > we should discuss this one further:
> > > >
> > > > On Wed, Aug 26, 2020, at 14:59, Ron Dagostino wrote:
> > > > >
> > > > > 2. We added a restriction to not allow users who authenticated
> using
> > > > > delegation tokens to create or update user SCRAM credentials. We
> > don't
> > > > > allow such authenticated users to create new tokens, and it would
> be
> > odd
> > > > if
> > > > > they could create a new user or change the password of the user for
> > the
> > > > > token.
> > > > >
> > > >
> > > > I don't think these two restrictions are comparable.  It seems to me
> > that
> > > > we forbid creating a new token based on an existing token in order to
> > force
> > > > users of delegation tokens to re-authenticate periodically through
> the
> > > > regular auth system.  If they could just create a new token based on
> > their
> > > > old token, there would be an obvious "wishing for more wishes"
> problem
> > and
> > > > they could just sidestep the regular authentication system entirely
> > once
> > > > they had a token.
> > > >
> > > > This issue doesn't exist here, since creating a new SCRAM user
> doesn't
> > do
> > > > anything to extend the life of the existing delegation token.  If the
> > user
> > > > has the permission to change SCRAM users, I don't see any reason why
> we
> > > > should forbid them from doing just that.  Users of delegation tokens
> > > > shouldn't be second-class citizens. A user with ALTER on CLUSTER
> should
> > > > have all the permissions associated with ALTER on CLUSTER, regardless
> > of whether
> > > > they logged in with Kerberos, delegation tokens, SCRAM, etc. etc.  I
> > don't
> > > > think the proposed restriction you mention here is consistent with
> > that.
> > > >
> > > > best,
> > > > Colin
> > > >
> > >
> >
>
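
For readers following the thread: the ALTER-on-CLUSTER grant under
discussion can already be expressed with the AdminClient ACL API. Below is
a minimal sketch, assuming an already-configured `Admin` client named
`admin` and a hypothetical principal `User:alice`:

{code:java}
import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

// Grant ALTER on the cluster resource ("kafka-cluster"). Under the approach
// discussed above, this single grant would cover both ACL management
// (CreateAcls/DeleteAcls) and the proposed AlterUserScramCredentials API.
AclBinding binding = new AclBinding(
    new ResourcePattern(ResourceType.CLUSTER, "kafka-cluster", PatternType.LITERAL),
    new AccessControlEntry("User:alice", "*", AclOperation.ALTER,
        AclPermissionType.ALLOW));
admin.createAcls(Collections.singletonList(binding)).all().get();
{code}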


[jira] [Resolved] (KAFKA-10129) Fail QA if there are javadoc warnings

2020-09-02 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-10129.

Resolution: Won't Fix

> Fail QA if there are javadoc warnings
> -
>
> Key: KAFKA-10129
> URL: https://issues.apache.org/jira/browse/KAFKA-10129
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
>
> from [~hachikuji] 
> (https://github.com/apache/kafka/pull/8660#pullrequestreview-425856179)
> {quote}
> One other question I had is whether we should consider making doc failures 
> also fail the build?
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: Regarding Contributor access

2020-09-02 Thread Mickael Maison
Done. Thanks for your interest in Apache Kafka!

On Wed, Sep 2, 2020 at 6:01 PM Satyawati Tripathi
 wrote:
>
> I want to work on Jira: https://issues.apache.org/jira/browse/KAFKA-10375
>
> Username: Satyatr
>
> On Wed, Sep 2, 2020 at 4:35 PM Satyawati Tripathi <
> satyawatitripathi@gmail.com> wrote:
>
> > Hi Team,
> >
> > Please provide me contributor access on Jira for Kafka Project.
> >
> > Thanks,
> > Satyawati Tripathi
> >


Re: Request for contributor Access to JIRA

2020-09-02 Thread Mickael Maison
Done. Thanks for your interest in Apache Kafka!

On Wed, Sep 2, 2020 at 6:01 PM Anoop Shenoy  wrote:
>
> Dear Team,
>
> I am a developer interested in contributing to the Open Source
> community.
>
> JIRA ID: *anoop.shenoy*
>
> Please do the needful.
>
> Regards,
> Anoop


Request for contributor Access to JIRA

2020-09-02 Thread Anoop Shenoy
Dear Team,

I am a developer interested in contributing to the Open Source
community.

JIRA ID: *anoop.shenoy*

Please do the needful.

Regards,
Anoop


Re: Regarding Contributor access

2020-09-02 Thread Satyawati Tripathi
I want to work on Jira: https://issues.apache.org/jira/browse/KAFKA-10375

Username: Satyatr

On Wed, Sep 2, 2020 at 4:35 PM Satyawati Tripathi <
satyawatitripathi@gmail.com> wrote:

> Hi Team,
>
> Please provide me contributor access on Jira for Kafka Project.
>
> Thanks,
> Satyawati Tripathi
>


Regarding Contributor access

2020-09-02 Thread Satyawati Tripathi
Hi Team,

Please provide me contributor access on Jira for Kafka Project.

Thanks,
Satyawati Tripathi


[jira] [Resolved] (KAFKA-10272) kafka-server-stop.sh fails on IBM i

2020-09-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10272.

Fix Version/s: 2.7.0
   Resolution: Fixed

> kafka-server-stop.sh fails on IBM i
> ---
>
> Key: KAFKA-10272
> URL: https://issues.apache.org/jira/browse/KAFKA-10272
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.5.0
> Environment: IBM i 7.2
>Reporter: Jesse Gorzinski
>Assignee: Jesse Gorzinski
>Priority: Minor
>  Labels: easyfix
> Fix For: 2.7.0
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> On the IBM i platform, the `kafka-server-stop.sh` script always fails with an 
> error message "No kafka server to stop"
>  
> The underlying cause is that the script relies on the output of `ps ax` to
> determine the PID. More specifically:
> {code:java}
> PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $1}')
> {code}
> On IBM i, the ps utility is unconventional and truncates the output with
> these arguments. For instance, here is part of the ps output:
> {code:java}
>  584329  - A 0:00 /QOpenSys/QIBM/ProdData/SC1/OpenSSH/sbin/sshd -R
>  584331  - A 0:00 
> /QOpenSys/QIBM/ProdData/SC1/OpenSSH/libexec/sftp-serv
>  584332  - A 0:00 /QOpenSys/QIBM/ProdData/SC1/OpenSSH/sbin/sshd -R
>  584334  pts/5 A 0:00 -bash
>  584365  pts/7 A 0:08 java -Xmx512M -Xms512M -server -XX:+UseG1GC 
> -XX:MaxGC
>  585353  pts/8 A 0:12 java -Xmx1G -Xms1G -server -XX:+UseG1GC 
> -XX:MaxGCPaus
>  585690  pts/9 A 0:00 ps ax
> {code}
>  
> Therefore, the resulting grep always comes up empty. When invoked as
> `ps -af`, it prints the full command (when piped) but includes the UID as
> the first column by default:
> {code:java}
> ps -af
>  UIDPID   PPID   CSTIMETTY  TIME CMD
> jgorzins 585353 583321   0 12:41:07  pts/8  0:41 java -Xmx1G -Xms1G -server 
> -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 
> -XX:+ExplicitGCInvokesConcurr
> jgorzins 585817 585794   0 14:44:25  pts/4  0:00 ps -af
> {code}
> So the following PID check works for IBM i:
>  
> {code:java}
> PIDS=$(ps -af | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $2}')
> {code}
> So a fix, which I have verified, would be:
> {code:java}
> if [[ "OS400" == $(uname -s) ]]; then
>   PIDS=$(ps -af | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $2}')
> else
>   PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk 
> '{print $1}')
> fi
> {code}
> All of this also applies to `zookeeper-server-stop.sh`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
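
For completeness, a sketch of the analogous change for
`zookeeper-server-stop.sh`, assuming the stock script selects the process
by grepping for 'zookeeper' (the exact pattern in the shipped script may
differ):

{code:bash}
#!/bin/bash
# Hypothetical analogue of the KAFKA-10272 fix for zookeeper-server-stop.sh.
# On IBM i (uname reports OS400), `ps -af` prints full commands but prepends
# the UID, so the PID moves from column 1 to column 2.
if [[ "OS400" == $(uname -s) ]]; then
  PIDS=$(ps -af | grep java | grep -i 'zookeeper' | grep -v grep | awk '{print $2}')
else
  PIDS=$(ps ax | grep java | grep -i 'zookeeper' | grep -v grep | awk '{print $1}')
fi
{code}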


[jira] [Resolved] (KAFKA-10098) Remove unnecessary escaping in regular expression in SaslAuthenticatorTest.java

2020-09-02 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-10098.

Fix Version/s: 2.7.0
   Resolution: Fixed

> Remove unnecessary escaping in regular expression in 
> SaslAuthenticatorTest.java
> ---
>
> Key: KAFKA-10098
> URL: https://issues.apache.org/jira/browse/KAFKA-10098
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Can Cecen
>Assignee: Can Cecen
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.7.0
>
>
> n.b. This is a newbie ticket designed to be an introduction to contributing 
> for the assignee.
> In line:
> {code}
> e.getMessage().matches(
> ".*\\<\\[" + expectedResponseTextRegex + 
> "]>.*\\<\\[" + receivedResponseTextRegex + ".*?]>"));
> {code}
> Neither '<' nor '>' needs to be escaped.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
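
For illustration only, a sketch of the same assertion with the redundant
escapes dropped ('[' still opens a character class in Java regex, so it
keeps its backslashes):

{code:java}
// '<' and '>' are ordinary characters in a Java regex; only '[' must stay escaped.
e.getMessage().matches(
    ".*<\\[" + expectedResponseTextRegex +
    "]>.*<\\[" + receivedResponseTextRegex + ".*?]>"));
{code}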


Re: [VOTE] KIP-554: Add Broker-side SCRAM Config API

2020-09-02 Thread Rajini Sivaram
Hi Ron/Colin,

Without any restrictions, if delegation tokens can be used to create new
users or change the password of the user you are impersonating, you also
get the power to create/renew a new token by authenticating as a SCRAM user
you just created or updated. It seems like a new power that we are granting
in a new API using an existing ACL. User management is a new class of
operations we are adding to the Kafka protocol. An alternative to
restricting delegation tokens would be to add a new ACL operation instead
of reusing `Alter` for user management: `AlterUsers/DescribeUsers` (like
AlterConfigs/DescribeConfigs).

Regards,

Rajini


On Wed, Sep 2, 2020 at 12:24 AM Colin McCabe  wrote:

> Hi Ron,
>
> Thanks.  We can wait for Rajini's reply to finalize this, but for now I
> guess that will unblock the PR at least.  If we do decide we want the
> restriction we can do a follow-on PR.
>
> It's good to see this API moving forward!
>
> best,
> Colin
>
>
> On Tue, Sep 1, 2020, at 12:55, Ron Dagostino wrote:
> > Hi Colin.  I've removed that requirement from the KIP and updated the PR
> > accordingly.
> >
> > Ron
> >
> > On Fri, Aug 28, 2020 at 2:27 PM Colin McCabe  wrote:
> >
> > > Hi Ron,
> > >
> > > Thanks for the update.  I agree with all of these changes, except I
> think
> > > we should discuss this one further:
> > >
> > > On Wed, Aug 26, 2020, at 14:59, Ron Dagostino wrote:
> > > >
> > > > 2. We added a restriction to not allow users who authenticated using
> > > > delegation tokens to create or update user SCRAM credentials. We
> don't
> > > > allow such authenticated users to create new tokens, and it would be
> odd
> > > if
> > > > they could create a new user or change the password of the user for
> the
> > > > token.
> > > >
> > >
> > > I don't think these two restrictions are comparable.  It seems to me
> that
> > > we forbid creating a new token based on an existing token in order to
> force
> > > users of delegation tokens to re-authenticate periodically through the
> > > regular auth system.  If they could just create a new token based on
> their
> > > old token, there would be an obvious "wishing for more wishes" problem
> and
> > > they could just sidestep the regular authentication system entirely
> once
> > > they had a token.
> > >
> > > This issue doesn't exist here, since creating a new SCRAM user doesn't
> do
> > > anything to extend the life of the existing delegation token.  If the
> user
> > > has the permission to change SCRAM users, I don't see any reason why we
> > > should forbid them from doing just that.  Users of delegation tokens
> > > shouldn't be second-class citizens. A user with ALTER on CLUSTER should
> > > have all the permissions associated with ALTER on CLUSTER, regardless
> of whether
> > > they logged in with Kerberos, delegation tokens, SCRAM, etc. etc.  I
> don't
> > > think the proposed restriction you mention here is consistent with
> that.
> > >
> > > best,
> > > Colin
> > >
> >
>
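
To make the operation under discussion concrete, here is a rough sketch of
the credential upsert as shaped in KIP-554 (names may differ from the code
that finally ships; the `admin` client, user, password, and iteration count
are hypothetical):

{code:java}
import java.util.Collections;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ScramCredentialInfo;
import org.apache.kafka.clients.admin.ScramMechanism;
import org.apache.kafka.clients.admin.UserScramCredentialUpsertion;

// Create or update the SCRAM credential for "alice" with 8192 PBKDF2
// iterations; this is the request whose authorization (ALTER on CLUSTER
// vs. a new AlterUsers operation) is being debated above.
ScramCredentialInfo info =
    new ScramCredentialInfo(ScramMechanism.SCRAM_SHA_256, 8192);
admin.alterUserScramCredentials(Collections.singletonList(
        new UserScramCredentialUpsertion("alice", info, "alice-secret")))
    .all()
    .get();
{code}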


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #40

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] https://issues.apache.org/jira/browse/KAFKA-10456 (#9240)


--
[...truncated 3.25 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithCaching PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithPersistentStore PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #40

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] https://issues.apache.org/jira/browse/KAFKA-10456 (#9240)


--
[...truncated 3.25 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testConsumerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testToString STARTED

org.apache.kafka.streams.test.TestRecordTest > testToString PASSED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords STARTED

org.apache.kafka.streams.test.TestRecordTest > testInvalidRecords PASSED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
STARTED

org.apache.kafka.streams.test.TestRecordTest > testPartialConstructorEquals 
PASSED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher STARTED

org.apache.kafka.streams.test.TestRecordTest > testMultiFieldMatcher PASSED

org.apache.kafka.streams.test.TestRecordTest > testFields STARTED

org.apache.kafka.streams.test.TestRecordTest > testFields PASSED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord STARTED

org.apache.kafka.streams.test.TestRecordTest > testProducerRecord PASSED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode STARTED

org.apache.kafka.streams.test.TestRecordTest > testEqualsAndHashCode PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #38

2020-09-02 Thread Apache Jenkins Server
See 


Changes:

[github] https://issues.apache.org/jira/browse/KAFKA-10456 (#9240)


--
[...truncated 3.23 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureGlobalTopicNameIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED


[jira] [Resolved] (KAFKA-10456) wrong description in kafka-console-producer.sh help

2020-09-02 Thread huxihx (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huxihx resolved KAFKA-10456.

Fix Version/s: 2.7.0
   Resolution: Fixed

> wrong description in kafka-console-producer.sh help
> ---
>
> Key: KAFKA-10456
> URL: https://issues.apache.org/jira/browse/KAFKA-10456
> Project: Kafka
>  Issue Type: Task
>  Components: producer 
>Affects Versions: 2.6.0
> Environment: linux
>Reporter: danilo batista queiroz
>Assignee: huxihx
>Priority: Trivial
>  Labels: documentation
> Fix For: 2.7.0
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> file: core/src/main/scala/kafka/tools/ConsoleProducer.scala
> On line 151, the description of "message-send-max-retries" contains the
> text 'retires'; the correct spelling is 'retries'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)