Re: [VOTE] KIP-653: Upgrade log4j to log4j2

2020-10-06 Thread Dongjin Lee
As of present:

- Binding: +2 (Gwen, John)
- Non-binding: +1 (David)

Now we need one more binding +1.

Thanks,
Dongjin
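As background for readers following the vote: part of what makes a backward-compatible migration plausible is that log4j2 accepts a properties-format configuration file reminiscent of log4j 1.x. A minimal illustrative config (an assumption for illustration, not taken from the KIP itself):

```properties
# Hypothetical minimal log4j2.properties, shown only to illustrate the
# properties-format configuration log4j2 supports; not part of KIP-653.
status = warn

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d] %p %m (%c)%n

rootLogger.level = INFO
rootLogger.appenderRef.stdout.ref = STDOUT
```

Existing log4j 1.x properties files are not directly reusable, but the shape of the configuration carries over, which is the kind of backward compatibility the KIP aims to preserve.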

On Wed, Oct 7, 2020 at 1:37 AM David Jacot  wrote:

> Thanks for driving this, Dongjin!
>
> The KIP looks good to me. I’m +1 (non-binding).
>
> Best,
> David
>
> Le mar. 6 oct. 2020 à 17:23, Dongjin Lee  a écrit :
>
> > As of present:
> >
> > - Binding: +2 (Gwen, John)
> > - Non-binding: 0
> >
> > Thanks,
> > Dongjin
> >
> > On Sat, Oct 3, 2020 at 10:51 AM John Roesler 
> wrote:
> >
> > > Thanks for the KIP, Dongjin!
> > >
> > > I’ve just reviewed the KIP document, and it looks good to me.
> > >
> > > I’m +1 (binding)
> > >
> > > Thanks,
> > > John
> > >
> > > On Fri, Oct 2, 2020, at 19:11, Gwen Shapira wrote:
> > > > +1 (binding)
> > > >
> > > > A very welcome update :)
> > > >
> > > > On Tue, Sep 22, 2020 at 9:09 AM Dongjin Lee 
> > wrote:
> > > > >
> > > > > Hi devs,
> > > > >
> > > > > Here I open the vote for KIP-653: Upgrade log4j to log4j2. It
> > > > > replaces the obsolete log4j logging library with the current
> > > > > standard, log4j2, while maintaining backward compatibility.
> > > > >
> > > > > Thanks,
> > > > > Dongjin
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Gwen Shapira
> > > > Engineering Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter | blog
> > > >
> > >
> >
> >
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*

*github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin*


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-10-06 Thread Guozhang Wang
Sorry I'm late to the party.

Matthias raised a point to me regarding the recent development of moving
restoration from stream threads to separate restore threads and allowing
the stream threads to process any processible tasks even when some other
tasks are still being restored by the restore threads:

https://issues.apache.org/jira/browse/KAFKA-10526
https://issues.apache.org/jira/browse/KAFKA-10577

That would cause the restoration of non-global states to be more similar to
global states such that some tasks would be processed even though the state
of the stream thread is not yet in RUNNING (because today we only transit
to it when ALL assigned tasks have completed restoration and are
processible).

Also, as Sophie already mentioned, today during REBALANCING (at the stream
thread level, it is PARTITION_REVOKED -> PARTITION_ASSIGNED) some tasks may
still be processed, and because of KIP-429 the RUNNING -> PARTITION_REVOKED
-> PARTITION_ASSIGNED sequence can happen within a single call and hence be
very "transient", whereas PARTITION_ASSIGNED -> RUNNING could still take
time as it only does the transition when all tasks are processible.

So I think it makes sense to add a RESTORING state at the stream client
level, defined as "at least one of the state stores assigned to this
client, either global or non-global, is still restoring", and emphasize
that during this state the client may still be able to process records,
just probably not at full speed.

As for REBALANCING, I think it is a bit less relevant to this KIP, but
here's a dump of my thoughts: if we can capture the period when "some tasks
do not belong to any clients and hence processing is not at full speed" it
would still be valuable, but unfortunately right now, since
onPartitionRevoked is not triggered each time on all clients, today's
transition would just produce a lot of very short REBALANCING state
periods, which are not really useful. So if we still want to keep that
state, maybe we can consider the following tweak: at the thread level, we
replace PARTITION_REVOKED / PARTITION_ASSIGNED with just a single
REBALANCING state, and we will transit to this state upon
onPartitionRevoked, but we will only transit out of this state upon
onAssignment when the assignor decides there's no follow-up rebalance
immediately (note we also schedule future rebalances for workload
balancing, but those would still trigger transiting out of it). At the
client level, we would enter REBALANCING when any thread enters
REBALANCING, and we would transit out of it when all threads have transited
out of it. In this case, it is possible that during a rebalance, only those
clients that have to revoke some partition would enter the REBALANCING
state, while others that only get additional tasks would not
enter this state at all.
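The aggregation rules above can be sketched as a small toy model. This is illustrative Python only, with hypothetical names; it is not the Kafka Streams implementation or API:

```python
# Toy sketch of the proposed client-level state derivation:
# - REBALANCING if any stream thread is currently rebalancing;
# - otherwise RESTORING if at least one state store assigned to this
#   client (global or non-global) is still restoring;
# - otherwise RUNNING.
def client_state(thread_states, any_store_restoring):
    if any(s == "REBALANCING" for s in thread_states):
        return "REBALANCING"
    if any_store_restoring:
        return "RESTORING"
    return "RUNNING"

# A client that only gains tasks never enters REBALANCING, but may
# still report RESTORING while the new tasks' stores catch up.
print(client_state(["RUNNING", "REBALANCING"], False))  # REBALANCING
print(client_state(["RUNNING", "RUNNING"], True))       # RESTORING
print(client_state(["RUNNING", "RUNNING"], False))      # RUNNING
```

Note that in this model a client in RESTORING may still be processing records, just not at full speed, matching the proposed definition.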

With all that being said, I think the discussion around REBALANCING is less
relevant to this KIP, and even for RESTORING I honestly think maybe we can
do it in another KIP outside of 406. It will, admittedly, leave us in a
temporary phase where the FSM of Kafka Streams is not perfect, but it
helps make incremental development progress on 406 itself.


Guozhang


On Mon, Oct 5, 2020 at 2:37 PM Sophie Blee-Goldman 
wrote:

> It seems a little misleading, but I actually have no real qualms about
> transitioning to the REBALANCING state *after* RESTORING. One of the side
> effects of KIP-429 was that in most cases we actually don't transition to
> REBALANCING at all until the very end of the rebalance, so REBALANCING
> doesn't really mean all that much any more. These days the majority of the
> time an instance spends in the REBALANCING state is actually spent on
> restoration anyways.
>
> If users are listening in on the REBALANCING -> RUNNING transition, then
> they might also be listening for the RUNNING -> REBALANCING transition, so
> we may need to actually go RUNNING -> REBALANCING -> RESTORING ->
> REBALANCING -> RUNNING. This feels a bit unwieldy but I don't think
> there's anything specifically wrong with it.
>
> That said, it makes me question the value of having a REBALANCING state at
> all. In the pre-KIP-429 days it made sense, because all tasks were paused
> and unavailable for IQ for the duration of the rebalance. But these days,
> the threads can continue processing any tasks they own during a rebalance,
> so the only time that tasks are truly unavailable is during the
> restoration phase.
>
> So, I find the idea of getting rid of the REBALANCING state altogether to
> be pretty appealing, in which case we'd probably need to introduce a new
> state listener and deprecate the current one as John proposed. I also
> wonder if this is the sort of thing we can just swallow as a breaking
> change in the upcoming 3.0 release.
>
> On Sat, Oct 3, 2020 at 11:02 PM Navinder Brar
>  wrote:
>
> >
> >
> >
> > Thanks a lot, Matthias for detailed feedback. I tend to agree with
> > changing the state machine
> >
> > itself if required. I think at the end of the day 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #150

2020-10-06 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.5-jdk8 #15

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[Randall Hauch] KAFKA-10188: Prevent SinkTask::preCommit from being called 
after SinkTask::stop (#8910)


--
[...truncated 3.10 MB...]


Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #117

2020-10-06 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-10-06 Thread John Roesler
Hi Dongjin,

Yes, those were the APIs I was thinking of. I honestly didn’t think of it until 
now. Sorry about that.

I agree, a POC implementation would help us to see if this is a good choice for
the KIP.

Thanks!
John
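The option John raises in the quoted discussion below, implementing point lookups now (via the suppression buffer's by-key index plus the upstream ValueGetter) and throwing UnsupportedOperationException on range scans until they are properly supported, can be sketched like this. The class and names are hypothetical illustrations, not the Kafka Streams internals:

```python
# Illustrative sketch of a partially implemented read-only view over a
# suppressed KTable: get() is cheap thanks to the buffer's by-key index,
# while key-ordered scans are rejected until properly supported
# (the suppression buffer is ordered by arrival time, not by key).
class SuppressedTableView:
    def __init__(self, buffer_by_key, parent_get):
        self._buffer = buffer_by_key   # dict: key -> still-buffered value
        self._parent_get = parent_get  # upstream ValueGetter-style lookup

    def get(self, key):
        # Prefer the value still held in the suppression buffer, falling
        # back to the upstream table's value getter.
        if key in self._buffer:
            return self._buffer[key]
        return self._parent_get(key)

    def range(self, from_key, to_key):
        raise NotImplementedError("key-ordered range scans not supported yet")

    def all(self):
        raise NotImplementedError("full scans not supported yet")

view = SuppressedTableView({"a": 1}, {"a": 0, "b": 2}.get)
print(view.get("a"))  # 1 (buffered value wins over the upstream value)
print(view.get("b"))  # 2 (falls back to the upstream ValueGetter)
```

This is the "incremental value first" shape: key lookups work immediately, and scans can be added later once the parent ValueGetter supports ranges.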

On Tue, Oct 6, 2020, at 10:21, Dongjin Lee wrote:
> You mean, the performance issue related to `#all` or `#range` query. Right?
> I reviewed the second approach (i.e., extending `ValueGetter`), and this
> approach is worth trying. Since KIP-508 was dropped from 2.7.0 release, we
> have enough time now.
> 
> Let me have a try. I think we can have a rough one by this weekend.
> 
> Regards,
> Dongjin
> 
> On Thu, Oct 1, 2020 at 4:52 AM John Roesler  wrote:
> 
> > Thanks Dongjin,
> >
> > It typically is nicer to be able to see usage examples, so
> > I'd certainly be in favor if you're willing to add it to the
> > KIP.
> >
> > I'm wondering if it's possible to implement the whole
> > ReadOnlyKeyValueStore interface as proposed, if we really go
> > ahead and just internally query into the suppression buffer
> > as well as using the upstream ValueGetter. The reason is
> > twofold:
> > 1. The suppression buffer is ordered by arrival time, not by
> > key. There is a by-key index, but it is also not ordered the
> > same way that in-memory stores are ordered. Thus, we'd have
> > a hard time implementing key-based range scans.
> > 2. The internal ValueGetter interface only supports get-by-
> > key lookups, so it would also need to be expanded to support
> > range scans on the parent table.
> >
> > Neither of these problems are insurmountable, but I'm
> > wondering if we _want_ to surmount them right now. Or should
> > we instead just throw an UnsupportedOperationException on
> > any API call that's inconvenient to implement right now?
> > Then, we could get incremental value by first supporting
> > (eg) key-based lookups and adding scans later.
> >
> > Or does this mean that our design so far is invalid, and we
> > should really just make people provision a separate
> > Materialized downstream? To pull this off, we'd need to
> > first address KIP-300's challenges, though.
> >
> > I'm honestly not sure what the right call is here.
> >
> > Thanks,
> > -John
> >
> > On Thu, 2020-10-01 at 01:50 +0900, Dongjin Lee wrote:
> > > > It seems like it must be a ReadOnlyKeyValueStore. Does that sound
> > right?
> > >
> > > Yes, it is. Would it be better to add a detailed description of how this
> > > feature affects interactive queries, with examples?
> > >
> > > Best,
> > > Dongjin
> > >
> > > On Tue, Sep 29, 2020 at 10:31 AM John Roesler 
> > wrote:
> > >
> > > > Hi Dongjin,
> > > >
> > > > Thanks! Sorry, I missed your prior message. The proposed API looks
> > good to
> > > > me.
> > > >
> > > > I’m wondering if we should specify what kind of store view would be
> > > > returned when querying the operation result. It seems like it must be a
> > > > ReadOnlyKeyValueStore. Does that sound right?
> > > >
> > > > Thanks!
> > > > John
> > > >
> > > > On Mon, Sep 28, 2020, at 10:06, Dongjin Lee wrote:
> > > > > Hi John,
> > > > >
> > > > > I updated the KIP with the discussion above. The 'Public Interfaces'
> > > > > section describes the new API, and the 'Rejected Alternatives'
> > section
> > > > > describes the reasoning about why we selected this API design and
> > > > rejected
> > > > > the other alternatives.
> > > > >
> > > > > Please have a look when you are free. And please note that the KIP
> > freeze
> > > > > for 2.7.0 is imminent.
> > > > >
> > > > > Thanks,
> > > > > Dongjin
> > > > >
> > > > > On Mon, Sep 21, 2020 at 11:35 PM Dongjin Lee 
> > wrote:
> > > > >
> > > > > > Hi John,
> > > > > >
> > > > > > I updated the PR applying the API changes we discussed above. I am
> > now
> > > > > > updating the KIP document.
> > > > > >
> > > > > > Thanks,
> > > > > > Dongjin
> > > > > >
> > > > > > On Sat, Sep 19, 2020 at 10:42 AM John Roesler  > >
> > > > wrote:
> > > > > > > Hi Dongjin,
> > > > > > >
> > > > > > > Yes, that’s right. At the time of KIP-307, we had no choice
> > > > > > > but to add a second name. But we do have a choice with Suppress.
> > > > > > >
> > > > > > > Thanks!
> > > > > > > -John
> > > > > > >
> > > > > > > On Thu, Sep 17, 2020, at 13:14, Dongjin Lee wrote:
> > > > > > > > Hi John,
> > > > > > > >
> > > > > > > > I just reviewed KIP-307. As far as I understood, ...
> > > > > > > >
> > > > > > > > 1. There was Materialized name initially.
> > > > > > > > 2. With KIP-307, Named Operations were added.
> > > > > > > > 3. Now we have two options for materializing suppression. If
> > we take
> > > > > > > > Materialized name here, we have two names for the same
> > operation,
> > > > which
> > > > > > > is
> > > > > > > > not feasible.
> > > > > > > >
> > > > > > > > Do I understand correctly?
> > > > > > > >
> > > > > > > > > Do you have a use case in mind for having two separate names
> > for
> > > > the
> > > > > > > > operation and the view?
> > > > > > > >
> > > > > > > 

Re: KIP-675: Convert KTable to a KStream using the previous value

2020-10-06 Thread Matthias J. Sax
Thanks for the KIP.

I am not sure if I understand the motivation. In particular the KIP says:

> The main problem, apart from needing more code, is that if the same event is 
> received twice at the same time and the commit time is not 0, the difference 
> is deleted and nothing is emitted.

Can you elaborate? Maybe you can provide a concrete example? I don't
understand the relationship between "the same event is received twice"
and a "non-zero commit time".


-Matthias
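For readers skimming the thread, the behavior the KIP proposes (quoted below) can be modeled in a few lines: a table-to-stream conversion where each emitted record also carries the value it replaced. This is only a conceptual sketch of the idea, not the proposed Kafka Streams API:

```python
# Toy model of "convert a KTable to a KStream knowing the previous
# value": for each upsert into the table, emit (key, old, new).
def to_stream_with_previous(updates):
    table = {}
    for key, new_value in updates:
        old_value = table.get(key)  # None if the key is new
        table[key] = new_value
        yield key, old_value, new_value

events = list(to_stream_with_previous([("k", 1), ("k", 3)]))
print(events)  # [('k', None, 1), ('k', 1, 3)]
```

Having the old value available downstream is what lets a consumer compute deltas without maintaining its own copy of the table, which is the pain point the KIP's motivation section describes.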

On 10/6/20 6:25 AM, Javier Freire Riobo wrote:
> Hi all,
> 
> I'd like to propose these changes to the Kafka Streams API.
> 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-675%3A+Convert+KTable+to+a+KStream+using+the+previous+value
> 
> This is a proposal to convert a KTable to a KStream knowing the previous
> value of the record.
> 
> I also opened a proof-of-concept PR:
> 
> PR #9381: https://github.com/apache/kafka/pull/9381
> 
> What do you think?
> 
> Cheers,
> Javier Freire
> 


Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-06 Thread Jun Rao
Hi, Kowshik,

Thanks for the follow up. Both look good to me.

For 2, it would be useful to also add that an admin should make sure that
no clients are using a deprecated feature version (e.g. using the client
version metric) before deploying a release that deprecates it.

Thanks,

Jun

On Tue, Oct 6, 2020 at 3:46 PM Kowshik Prakasam 
wrote:

> Hi Jun,
>
> I have added the following details in the KIP-584 write up:
>
> 1. Deployment, IBP deprecation and avoidance of double rolls. This section
> talks about the various phases of work that would be required to use this
> KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
> values are advanced). Link to section:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
> .
>
> 2. Feature version deprecation. This section explains the idea for feature
> version deprecation (using highest supported feature min version) which you
> had proposed during code review:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> .
>
> Please let me know if you have any questions.
>
>
> Cheers,
> Kowshik
>
>
> On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:
>
> > Hi, Kowshik,
> >
> > Thanks for the update. Regarding enabling a single rolling restart in the
> > future, could we sketch out a bit how this will work by treating IBP as a
> > feature? For example, IBP currently uses the release version and this KIP
> > uses an integer for versions. How do we bridge the gap between the two?
> > Does min.version still make sense for IBP as a feature?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Colin,
> > >
> > > Thanks for the feedback. Those are very good points. I have made the
> > > following changes to the KIP as you had suggested:
> > > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest`
> schema.
> > > The initial implementation won't be making use of the field, but we can
> > > always use it in the future as the need arises.
> > > 2. Modified the `FinalizedFeaturesEpoch` field in `ApiVersionsResponse`
> > to
> > > use int64. This is to avoid overflow problems in the future once ZK is
> > > gone.
> > >
> > > I have also incorporated these changes into the versioning write path
> PR
> > > that is currently under review:
> > https://github.com/apache/kafka/pull/9001.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > >
> > > On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam <
> kpraka...@confluent.io
> > >
> > > wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > Thanks for the feedback. It's a very good point. I have now modified
> > the
> > > > KIP-584 write-up "goals" section a bit. It now mentions one of the
> > goals
> > > as
> > > > enabling rolling upgrades using a single restart (instead of 2).
> Also I
> > > > have removed the text explicitly aiming for deprecation of IBP. Note
> > that
> > > > previously under "Potential features in Kafka" the IBP was mentioned
> > > under
> > > > point (4) as a possible coarse-grained feature. Hopefully, now the 2
> > > > sections of the KIP align with each other well.
> > > >
> > > >
> > > > Cheers,
> > > > Kowshik
> > > >
> > > >
> > > > On Fri, Sep 25, 2020 at 2:03 PM Colin McCabe 
> > wrote:
> > > >
> > > >> On Tue, Sep 22, 2020, at 00:43, Kowshik Prakasam wrote:
> > > >> > Hi all,
> > > >> >
> > > >> > I wanted to let you know that I have made the following changes to
> > the
> > > >> > KIP-584 write up. The purpose is to ensure the design is correct
> > for a
> > > >> few
> > > >> > things which came up during implementation:
> > > >> >
> > > >>
> > > >> Hi Kowshik,
> > > >>
> > > >> Thanks for the updates.
> > > >>
> > > >> >
> > > >> > 1. Per FeatureUpdate error code: The UPDATE_FEATURES controller
> API
> > is
> > > >> no
> > > >> > longer transactional. Going forward, we allow for individual
> > > >> FeatureUpdate
> > > >> > to succeed/fail in the request. As a result, the response schema
> now
> > > >> > contains an error code per FeatureUpdate as well as a top-level
> > error
> > > >> code.
> > > >> > Overall this is a better design because it better represents the
> > > nature
> > > >> of
> > > >> > the API: each FeatureUpdate in the request is independent of the
> > other
> > > >> > updates, and the controller can process/apply these independently
> to
> > > ZK.
> > > >> > When an UPDATE_FEATURES request fails, this new design provides
> > better
> > > >> > clarity to the caller on which FeatureUpdate could not be applied
> > (via
> > > >> the
> > > >> > individual error codes). In the previous design, we were unable to
> > > >> achieve
> > > >> > such an increased level of clarity in communicating the error
> codes.
> > > >> >
> > > >>
> > > >> OK
> > > >>
> > > >> >
> > > >> > 2. Due to #1, 

Re: [DISCUSS] KIP-631: The Quorum-based Kafka Controller

2020-10-06 Thread Jun Rao
Hi, Colin,

Thanks for the reply. Made another pass of the KIP. A few more comments
below.

55. We discussed earlier why the current behavior where we favor the
current broker registration is better. Have you given this more thought?

80. Config related.
80.1 Currently, each broker only has the following 3 required configs. It
will be useful to document the required configs post KIP-500 (in both the
dedicated and shared controller mode).
broker.id
log.dirs
zookeeper.connect
80.2 It would be useful to document all deprecated configs post KIP-500.
For example, all zookeeper.* are obviously deprecated. But there could be
others. For example, since we don't plan to support auto broker id
generation, it seems broker.id.generation.enable is deprecated too.
80.3 Could we make it clear that controller.connect replaces quorum.voters
in KIP-595?
80.4 Could we document that broker.id is now optional?
80.5 The KIP suggests that controller.id is optional on the controller
node. I am concerned that this can cause a bit of confusion in 2 aspects.
First, in the dedicated controller mode, controller.id is not optional
(since broker.id is now optional). Second, in the shared controller mode,
it may not be easy for the user to figure out the default value of
controller.id to set controller.connect properly.
80.6 Regarding the consistency of config names, our metrics already include
controller. So, prefixing all controller related configs with "controller"
may be more consistent. If we choose to do that, could we rename all new
configs here and in KIP-595 consistently?

81. Metrics
81.1 kafka.controller:type=KafkaController,name=MetadataSnapshotLag: This
is now redundant since KIP-630 already has
kafka.controller:type=KafkaController,name=SnapshotLag.
81.2 Do we need both kafka.controller:type=KafkaServer,name=MetadataLag and
kafka.controller:type=KafkaController,name=MetadataLag since in the shared
controller mode, the metadata log is shared?

82. Metadata records
82.1 BrokerRecord: It needs to include supported features based on KIP-584.
82.2 FeatureLevel: The finalized feature in KIP-584 has the following
structure and needs to be reflected here.
{
  "version": 0,  // int32 -> Represents the version of the schema for the
                 // data stored in the ZK node
  "status": 1,   // int32 -> Represents the status of the node
  "features": {
    "group_coordinator": {       // string -> name of the feature
      "min_version_level": 0,    // int16 -> Represents the cluster-wide
                                 // finalized minimum version level (>=1) of this feature
      "max_version_level": 3     // int16 -> Represents the cluster-wide
                                 // finalized maximum version level (>=1 and
                                 // >= min_version_level) of this feature
    },
    "consumer_offsets_topic_schema": {
      "min_version_level": 0,
      "max_version_level": 4
    }
  }
}
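The invariants stated in the comments above (finalized levels are >= 1, and the max level is at least the min level) are easy to express as a check. This is a hypothetical helper for illustration, not Kafka code:

```python
# Illustrative validation of a finalized-features map shaped like the
# structure above: {"feature_name": {"min_version_level": m,
# "max_version_level": M}} with 1 <= m <= M.
def validate_finalized_features(features):
    for name, levels in features.items():
        lo = levels["min_version_level"]
        hi = levels["max_version_level"]
        if lo < 1 or hi < lo:
            raise ValueError(f"invalid finalized range for {name}: [{lo}, {hi}]")

validate_finalized_features(
    {"group_coordinator": {"min_version_level": 1, "max_version_level": 3}}
)  # passes silently
```

(The example values in the quoted structure use 0 for min_version_level, which would not satisfy the ">=1" constraint stated in its own comments; presumably one of the two needs reconciling in the KIP.)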

83. RPC
83.1 BrokerHeartbeatRequest: Could you document when the controller assigns
a new broker epoch?
83.2 BrokerHeartbeatResponse: Do we need a new error code to indicate a
broker is fenced?
83.3 What error/state indicates an unsuccessful controlled shutdown in
BrokerHeartbeatResponse? Currently, the controlled shutdown responses also
include a list of remaining partitions. That's mostly for debugging
purposes. Should we include it in BrokerHeartbeatResponse too?
83.4 Could we document when Listeners, Features and Rack are expected to be
set in BrokerHeartbeatRequest?
83.5 Currently, the metadata response only returns a list of brokers.
Should we have a separate field for the controller info or change the field
name to sth like nodes? Also, is the rack field relevant to the controller?

84. There are a few references to __kafka_metadata. However, KIP-595
uses __cluster_metadata.

85. "When a broker is fenced, it cannot process any client requests. "
Should we send a new error code in the response to indicate that the broker
is fenced?

86. "The controller can generate a new broker epoch by using the latest log
offset." Perhaps it's more accurate to say the new broker epoch is the
offset for the corresponding BrokerRecord in the metadata log.

87. meta.properties:
87.1 Could you document the format change to this file? Also, should we
bump up the version of the file?
87.2 How do we know a node is configured in KIP-500 mode?

88. CurMetadataOffset: Should we use nextMetadataOffset since it's more
consistent with fetchOffset and HWM?
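Two of the points above, the broker epoch being derived from the offset of the corresponding BrokerRecord in the metadata log (comment 86), and fencing of brokers with stale epochs (comments 83.2 and 85), can be illustrated with a toy model. This is not the KIP-631 protocol, just a sketch of the idea with made-up names:

```python
# Toy controller: a broker's epoch is the metadata-log offset of its
# registration record, and a heartbeat carrying a stale epoch is fenced.
class ToyController:
    def __init__(self):
        self.log = []     # metadata log: append-only list of records
        self.epochs = {}  # broker_id -> current (latest) epoch

    def register_broker(self, broker_id):
        # The new broker epoch is the offset of its BrokerRecord.
        self.log.append(("BrokerRecord", broker_id))
        epoch = len(self.log) - 1
        self.epochs[broker_id] = epoch
        return epoch

    def heartbeat(self, broker_id, epoch):
        # True = heartbeat accepted; False = broker is fenced
        # (its epoch no longer matches the latest registration).
        return self.epochs.get(broker_id) == epoch

c = ToyController()
e_old = c.register_broker(0)
assert c.heartbeat(0, e_old)
e_new = c.register_broker(0)  # re-registration appends a new record
print(c.heartbeat(0, e_old))  # False: the old epoch is fenced
print(c.heartbeat(0, e_new))  # True
```

Tying the epoch to a log offset gives monotonically increasing epochs for free, which is presumably the appeal of the phrasing in comment 86.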

Thanks,

Jun


On Mon, Oct 5, 2020 at 9:03 AM Colin McCabe  wrote:

> On Mon, Sep 28, 2020, at 11:41, Jun Rao wrote:
> > Hi, Colin,
> >
> > Thanks for the reply. A few more comments below.
> >
> > 62.
> > 62.1 controller.listener.names: So, is this used for the controller or
> the
> > broker trying to connect to the controller?
> >
>
> Hi Jun,
>
> It's used by both.  The broker tries to connect to controllers by using
> the first listener in this list.  The controller uses the list to determine
> which listeners it should bind to.
>
> >
> > 62.2 If we want to take the approach to 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-06 Thread Anna Povzner
Hi Bill,

Regarding KIP-612, only the first half of the KIP will get into 2.7
release: Broker-wide and per-listener connection rate limits, including
corresponding configs and metric (KAFKA-10023). I see that the table in the
release plan tags KAFKA-10023 as "old", not sure what it refers to. Note
that while KIP-612 was approved prior to 2.6 release, none of the
implementation went into 2.6 release.

The second half of the KIP that adds per-IP connection rate limiting will
need to be postponed (KAFKA-10024) till the following release.

Thanks,
Anna
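For context on what a per-listener connection rate limit means in practice, here is a rough sliding-window sketch. It is hypothetical illustration only, not the KAFKA-10023 implementation (which lives in the broker's socket server and uses Kafka's quota machinery):

```python
import collections

# Toy per-listener limiter: each listener keeps the timestamps of
# connections accepted in the last second; a connection arriving when
# the window is full would be delayed/refused by the caller.
class ConnectionRateLimiter:
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.events = collections.deque()  # accept timestamps (seconds)

    def try_accept(self, now):
        # Evict timestamps that have fallen out of the 1-second window.
        while self.events and now - self.events[0] >= 1.0:
            self.events.popleft()
        if len(self.events) < self.max_per_second:
            self.events.append(now)
            return True
        return False

limiter = ConnectionRateLimiter(max_per_second=2)
print(limiter.try_accept(0.0))  # True
print(limiter.try_accept(0.1))  # True
print(limiter.try_accept(0.2))  # False: over the per-listener rate
print(limiter.try_accept(1.5))  # True: the window has moved on
```

The per-IP variant deferred to KAFKA-10024 is the same idea keyed by client address instead of by listener.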

On Tue, Oct 6, 2020 at 2:30 PM Bill Bejeck  wrote:

> Hi Kowshik,
>
> Given that the new feature is contained in the PR and the tooling is
> follow-on work (minor work, but that's part of the submitted PR), I think
> this is fine.
>
> Thanks,
> BIll
>
> On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam 
> wrote:
>
> > Hey Bill,
> >
> > For KIP-584 , we are in
> the
> > process of reviewing/merging the write path PR into AK trunk:
> > https://github.com/apache/kafka/pull/9001 . As far as the KIP goes, this
> > PR
> > is a major milestone. The PR merge will hopefully be done before EOD
> > tomorrow in time for the feature freeze. Beyond this PR, couple things
> are
> > left to be completed for this KIP: (1) tooling support and (2)
> implementing
> > support for feature version deprecation in the broker . In particular,
> (1)
> > is important for this KIP and the code changes are external to the broker
> > (since it is a separate tool we intend to build). As of now, we won't be
> > able to merge the tooling changes before feature freeze date. Would it be
> > ok to merge the tooling changes before code freeze on 10/22? The tooling
> > requirements are explained here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-584
> > 
> >
> >
> %3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
> >
> > I would love to hear thoughts from Boyang and Jun as well.
> >
> >
> > Thanks,
> > Kowshik
> >
> >
> >
> > On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:
> >
> > > Hi John,
> > >
> > > I've updated the list of expected KIPs for 2.7.0 with KIP-478.
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Mon, Oct 5, 2020 at 11:26 AM John Roesler 
> > wrote:
> > >
> > > > Hi Bill,
> > > >
> > > > Sorry about this, but I've just noticed that KIP-478 is
> > > > missing from the list. The url is:
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> > > >
> > > > The KIP was accepted a long time ago, and the implementation
> > > > has been trickling in since 2.6 branch cut. However, most of
> > > > the public API implementation is done now, so I think at
> > > > this point, we can call it "released in 2.7.0". I'll make
> > > > sure it's done by feature freeze.
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > > > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > > > > All,
> > > > >
> > > > > With the KIP acceptance deadline passing yesterday, I've updated
> the
> > > > > planned KIP content section of the 2.7.0 release plan
> > > > > <
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > >
> > > > > .
> > > > >
> > > > > Removed proposed KIPs that did not get approval for 2.7.0
> > > > >
> > > > >1. KIP-653
> > > > ><
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > > > >
> > > > >2. KIP-608
> > > > ><
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> > > > >
> > > > >3. KIP-508
> > > > ><
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> > > > >
> > > > >
> > > > > KIPs added
> > > > >
> > > > >1. KIP-671
> > > > ><
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> > > > >
> > > > >
> > > > >
> > > > > Please let me know if I've missed anything.
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > > > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck 
> > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > Just a reminder that the KIP freeze is next Wednesday, September
> > > 30th.
> > > > > > Any KIP aiming to go in the 2.7.0 release needs to be accepted by
> > > this
> > > > date.
> > > > > >
> > > > > > Thanks,
> > > > > > Bill
> > > > > >
> > > > > > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck 
> > > > wrote:
> > > > > >
> > > > > > > Boyang,
> > > > > > >
> > > > > > > Done. Thanks for the heads up.
> > > > > > >
> > > > > > > -Bill
> > > > > > >
> > > > > > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <
> > > > reluctanthero...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hey Bill,
> > > > > > 

Re: [VOTE] KIP-584: Versioning scheme for features

2020-10-06 Thread Kowshik Prakasam
Hi Jun,

I have added the following details in the KIP-584 write up:

1. Deployment, IBP deprecation and avoidance of double rolls. This section
talks about the various phases of work that would be required to use this
KIP to eventually avoid Broker double rolls in the cluster (whenever IBP
values are advanced). Link to section:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Deployment,IBPdeprecationandavoidanceofdoublerolls
.

2. Feature version deprecation. This section explains the idea for feature
version deprecation (using highest supported feature min version) which you
had proposed during code review:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
.

Please let me know if you have any questions.


Cheers,
Kowshik


On Tue, Sep 29, 2020 at 11:07 AM Jun Rao  wrote:

> Hi, Kowshik,
>
> Thanks for the update. Regarding enabling a single rolling restart in the
> future, could we sketch out a bit how this will work by treating IBP as a
> feature? For example, IBP currently uses the release version and this KIP
> uses an integer for versions. How do we bridge the gap between the two?
> Does min.version still make sense for IBP as a feature?
>
> Thanks,
>
> Jun
>
> On Fri, Sep 25, 2020 at 5:57 PM Kowshik Prakasam 
> wrote:
>
> > Hi Colin,
> >
> > Thanks for the feedback. Those are very good points. I have made the
> > following changes to the KIP as you had suggested:
> > 1. Included the `timeoutMs` field in the `UpdateFeaturesRequest` schema.
> > The initial implementation won't be making use of the field, but we can
> > always use it in the future as the need arises.
> > 2. Modified the `FinalizedFeaturesEpoch` field in `ApiVersionsResponse`
> to
> > use int64. This is to avoid overflow problems in the future once ZK is
> > gone.
> >
> > I have also incorporated these changes into the versioning write path PR
> > that is currently under review:
> https://github.com/apache/kafka/pull/9001.
> >
> >
> > Cheers,
> > Kowshik
> >
> >
> >
> > On Fri, Sep 25, 2020 at 4:57 PM Kowshik Prakasam  >
> > wrote:
> >
> > > Hi Jun,
> > >
> > > Thanks for the feedback. It's a very good point. I have now modified
> the
> > > KIP-584 write-up "goals" section a bit. It now mentions one of the
> goals
> > as
> > > enabling rolling upgrades using a single restart (instead of 2). Also I
> > > have removed the text explicitly aiming for deprecation of IBP. Note
> that
> > > previously under "Potential features in Kafka" the IBP was mentioned
> > under
> > > point (4) as a possible coarse-grained feature. Hopefully, now the 2
> > > sections of the KIP align with each other well.
> > >
> > >
> > > Cheers,
> > > Kowshik
> > >
> > >
> > > On Fri, Sep 25, 2020 at 2:03 PM Colin McCabe 
> wrote:
> > >
> > >> On Tue, Sep 22, 2020, at 00:43, Kowshik Prakasam wrote:
> > >> > Hi all,
> > >> >
> > >> > I wanted to let you know that I have made the following changes to
> the
> > >> > KIP-584 write up. The purpose is to ensure the design is correct
> for a
> > >> few
> > >> > things which came up during implementation:
> > >> >
> > >>
> > >> Hi Kowshik,
> > >>
> > >> Thanks for the updates.
> > >>
> > >> >
> > >> > 1. Per FeatureUpdate error code: The UPDATE_FEATURES controller API
> is
> > >> no
> > >> > longer transactional. Going forward, we allow for individual
> > >> FeatureUpdate
> > >> > to succeed/fail in the request. As a result, the response schema now
> > >> > contains an error code per FeatureUpdate as well as a top-level
> error
> > >> code.
> > >> > Overall this is a better design because it better represents the
> > nature
> > >> of
> > >> > the API: each FeatureUpdate in the request is independent of the
> other
> > >> > updates, and the controller can process/apply these independently to
> > ZK.
> > >> > When an UPDATE_FEATURES request fails, this new design provides
> better
> > >> > clarity to the caller on which FeatureUpdate could not be applied
> (via
> > >> the
> > >> > individual error codes). In the previous design, we were unable to
> > >> achieve
> > >> > such an increased level of clarity in communicating the error codes.
> > >> >
> > >>
> > >> OK
> > >>
> > >> >
> > >> > 2. Due to #1, there were some minor changes required to the proposed
> > >> Admin
> > >> > APIs (describeFeatures and updateFeatures). A few unnecessary public
> > >> APIs
> > >> > have been removed, and couple essential ones have been added. The
> > latest
> > >> > changes now represent the latest design.
> > >> >
> > >> > 3. The timeoutMs field has been removed from the UPDATE_FEATURES
> > >> > API request, since it was not found to be required during
> > >> > implementation.
> > >> >
> > >>
> > >> Please don't get rid of timeoutMs. timeoutMs is required if you want
> > >> to implement the ability to time out the call if the controller can't
> > >> get to it
> > 
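The per-FeatureUpdate error semantics discussed in this thread (each update applied independently, with its own error code alongside a top-level code) can be sketched in plain Java. This is an illustration only: the class, the stand-in map for the finalized versions in ZK, and the numeric error code are all hypothetical, not the actual controller code or the real Kafka Errors enum.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FeatureUpdateSketch {
    // Hypothetical stand-in for the finalized feature versions the
    // controller stores in ZK; not the real broker data structure.
    static final Map<String, Short> finalized = new LinkedHashMap<>();
    static {
        finalized.put("group_coordinator", (short) 1);
    }

    // Apply a single FeatureUpdate independently of any others and return a
    // per-update error code (0 = NONE; 56 is a made-up "invalid version" code).
    static int applyUpdate(String feature, short maxVersionLevel) {
        Short current = finalized.get(feature);
        if (current != null && maxVersionLevel < current) {
            return 56; // reject a downgrade below the finalized level
        }
        finalized.put(feature, maxVersionLevel);
        return 0;
    }

    public static void main(String[] args) {
        Map<String, Short> request = new LinkedHashMap<>();
        request.put("group_coordinator", (short) 2);    // valid upgrade
        request.put("transaction_protocol", (short) 1); // brand-new feature

        // Each update succeeds or fails on its own; a failure does not roll
        // back the others, mirroring the non-transactional design described
        // in the thread above.
        for (Map.Entry<String, Short> e : request.entrySet()) {
            int code = applyUpdate(e.getKey(), e.getValue());
            System.out.println(e.getKey() + " -> error code " + code);
        }
    }
}
```

The point of the design is visible in the loop: because updates are processed one by one, the response can carry a distinct error code per FeatureUpdate, which a transactional all-or-nothing API could not report as precisely.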

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #149

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Refactor unit tests around RocksDBConfigSetter (#9358)

[github] KAFKA-10527; Voters should not reinitialize as leader in same epoch 
(#9348)

[github] KAFKA-10338; Support PEM format for SSL key and trust stores (KIP-651) 
(#9345)

[github] KAFKA-10188: Prevent SinkTask::preCommit from being called after 
SinkTask::stop (#8910)

[github] KAFKA-9929: fix: add missing default implementations (#9321)


--
[...truncated 3.38 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #118

2020-10-06 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-06 Thread Bill Bejeck
Hi Kowshik,

Given that the new feature is contained in the PR and the tooling is
follow-on work (minor work, but that's part of the submitted PR), I think
this is fine.

Thanks,
Bill

On Tue, Oct 6, 2020 at 5:00 PM Kowshik Prakasam 
wrote:

> Hey Bill,
>
> For KIP-584, we are in the
> process of reviewing/merging the write path PR into AK trunk:
> https://github.com/apache/kafka/pull/9001 . As far as the KIP goes, this
> PR
> is a major milestone. The PR merge will hopefully be done before EOD
> tomorrow in time for the feature freeze. Beyond this PR, a couple of things
> are left to be completed for this KIP: (1) tooling support and (2)
> implementing support for feature version deprecation in the broker. In
> particular, (1)
> is important for this KIP and the code changes are external to the broker
> (since it is a separate tool we intend to build). As of now, we won't be
> able to merge the tooling changes before feature freeze date. Would it be
> ok to merge the tooling changes before code freeze on 10/22? The tooling
> requirements are explained here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584
> 
>
> %3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport
>
> I would love to hear thoughts from Boyang and Jun as well.
>
>
> Thanks,
> Kowshik
>
>
>
> On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:
>
> > Hi John,
> >
> > I've updated the list of expected KIPs for 2.7.0 with KIP-478.
> >
> > Thanks,
> > Bill
> >
> > On Mon, Oct 5, 2020 at 11:26 AM John Roesler 
> wrote:
> >
> > > Hi Bill,
> > >
> > > Sorry about this, but I've just noticed that KIP-478 is
> > > missing from the list. The url is:
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> > >
> > > The KIP was accepted a long time ago, and the implementation
> > > has been trickling in since 2.6 branch cut. However, most of
> > > the public API implementation is done now, so I think at
> > > this point, we can call it "released in 2.7.0". I'll make
> > > sure it's done by feature freeze.
> > >
> > > Thanks,
> > > -John
> > >
> > > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > > > All,
> > > >
> > > > With the KIP acceptance deadline having passed yesterday, I've updated the
> > > > planned KIP content section of the 2.7.0 release plan
> > > > <
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > >
> > > > .
> > > >
> > > > Removed proposed KIPs that did not get approval for 2.7.0
> > > >
> > > >1. KIP-653
> > > ><
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > > >
> > > >2. KIP-608
> > > ><
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> > > >
> > > >3. KIP-508
> > > ><
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> > > >
> > > >
> > > > KIPs added
> > > >
> > > >1. KIP-671
> > > ><
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> > > >
> > > >
> > > >
> > > > Please let me know if I've missed anything.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck 
> wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > Just a reminder that the KIP freeze is next Wednesday, September
> > 30th.
> > > > > Any KIP aiming to go in the 2.7.0 release needs to be accepted by
> > this
> > > date.
> > > > >
> > > > > Thanks,
> > > > > Bill
> > > > >
> > > > > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck 
> > > wrote:
> > > > >
> > > > > > Boyang,
> > > > > >
> > > > > > Done. Thanks for the heads up.
> > > > > >
> > > > > > -Bill
> > > > > >
> > > > > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <
> > > reluctanthero...@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > > > Hey Bill,
> > > > > > >
> > > > > > > unfortunately KIP-590 will not be in the 2.7 release, could you
> > > > > > > move it to postponed KIPs?
> > > > > > >
> > > > > > > Best,
> > > > > > > Boyang
> > > > > > >
> > > > > > > On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck  >
> > > wrote:
> > > > > > >
> > > > > > > > Hi Gary,
> > > > > > > >
> > > > > > > > It's been added.
> > > > > > > >
> > > > > > > > Regards,
> > > > > > > > Bill
> > > > > > > >
> > > > > > > > On Thu, Sep 10, 2020 at 4:14 PM Gary Russell <
> > > gruss...@vmware.com>
> > > > > > > wrote:
> > > > > > > > > Can someone add a link to the release plan page [1] to the
> > > Future
> > > > > > > > Releases
> > > > > > > > > page [2]?
> > > > > > > > >
> > > > > > > > > I have the latter bookmarked.
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > [1]:
> > > > > > > > >
> > > > > > >
> > >
> >
> 

Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-10-06 Thread Kowshik Prakasam
Hey Bill,

For KIP-584, we are in the
process of reviewing/merging the write path PR into AK trunk:
https://github.com/apache/kafka/pull/9001 . As far as the KIP goes, this PR
is a major milestone. The PR merge will hopefully be done before EOD
tomorrow in time for the feature freeze. Beyond this PR, a couple of things are
left to be completed for this KIP: (1) tooling support and (2) implementing
support for feature version deprecation in the broker. In particular, (1)
is important for this KIP and the code changes are external to the broker
(since it is a separate tool we intend to build). As of now, we won't be
able to merge the tooling changes before feature freeze date. Would it be
ok to merge the tooling changes before code freeze on 10/22? The tooling
requirements are explained here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-584

%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Toolingsupport

I would love to hear thoughts from Boyang and Jun as well.


Thanks,
Kowshik



On Mon, Oct 5, 2020 at 3:29 PM Bill Bejeck  wrote:

> Hi John,
>
> I've updated the list of expected KIPs for 2.7.0 with KIP-478.
>
> Thanks,
> Bill
>
> On Mon, Oct 5, 2020 at 11:26 AM John Roesler  wrote:
>
> > Hi Bill,
> >
> > Sorry about this, but I've just noticed that KIP-478 is
> > missing from the list. The url is:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-478+-+Strongly+typed+Processor+API
> >
> > The KIP was accepted a long time ago, and the implementation
> > has been trickling in since 2.6 branch cut. However, most of
> > the public API implementation is done now, so I think at
> > this point, we can call it "released in 2.7.0". I'll make
> > sure it's done by feature freeze.
> >
> > Thanks,
> > -John
> >
> > On Thu, 2020-10-01 at 13:49 -0400, Bill Bejeck wrote:
> > > All,
> > >
> > > With the KIP acceptance deadline having passed yesterday, I've updated the
> > > planned KIP content section of the 2.7.0 release plan
> > > <
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > >
> > > .
> > >
> > > Removed proposed KIPs that did not get approval for 2.7.0
> > >
> > >1. KIP-653
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> > >
> > >2. KIP-608
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-608+-+Expose+Kafka+Metrics+in+Authorizer
> > >
> > >3. KIP-508
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-508%3A+Make+Suppression+State+Queriable
> > >
> > >
> > > KIPs added
> > >
> > >1. KIP-671
> > ><
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-671%3A+Introduce+Kafka+Streams+Specific+Uncaught+Exception+Handler
> > >
> > >
> > >
> > > Please let me know if I've missed anything.
> > >
> > > Thanks,
> > > Bill
> > >
> > > On Thu, Sep 24, 2020 at 1:47 PM Bill Bejeck  wrote:
> > >
> > > > Hi All,
> > > >
> > > > Just a reminder that the KIP freeze is next Wednesday, September
> 30th.
> > > > Any KIP aiming to go in the 2.7.0 release needs to be accepted by
> this
> > date.
> > > >
> > > > Thanks,
> > > > Bill
> > > >
> > > > On Tue, Sep 22, 2020 at 12:11 PM Bill Bejeck 
> > wrote:
> > > >
> > > > > Boyang,
> > > > >
> > > > > Done. Thanks for the heads up.
> > > > >
> > > > > -Bill
> > > > >
> > > > > On Mon, Sep 21, 2020 at 6:36 PM Boyang Chen <
> > reluctanthero...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hey Bill,
> > > > > >
> > > > > > unfortunately KIP-590 will not be in the 2.7 release, could you
> > > > > > move it to postponed KIPs?
> > > > > >
> > > > > > Best,
> > > > > > Boyang
> > > > > >
> > > > > > On Thu, Sep 10, 2020 at 2:41 PM Bill Bejeck 
> > wrote:
> > > > > >
> > > > > > > Hi Gary,
> > > > > > >
> > > > > > > It's been added.
> > > > > > >
> > > > > > > Regards,
> > > > > > > Bill
> > > > > > >
> > > > > > > On Thu, Sep 10, 2020 at 4:14 PM Gary Russell <
> > gruss...@vmware.com>
> > > > > > wrote:
> > > > > > > > Can someone add a link to the release plan page [1] to the
> > Future
> > > > > > > Releases
> > > > > > > > page [2]?
> > > > > > > >
> > > > > > > > I have the latter bookmarked.
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > [1]:
> > > > > > > >
> > > > > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=158872629
> > > > > > > > [2]:
> > > > > > >
> > https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
> > > > > > > > 
> > > > > > > > From: Bill Bejeck 
> > > > > > > > Sent: Wednesday, September 9, 2020 4:35 PM
> > > > > > > > To: dev 
> > > > > > > > Subject: Re: [DISCUSS] Apache Kafka 2.7.0 release
> > > > > > > >
> > > > > > > > Hi Dongjin,
> > > > > > > >
> > > > > > > > I've moved both KIPs to the release plan.
> > > > > > > >
> > > > > > > > Keep in mind the cutoff for KIP acceptance is 

[jira] [Created] (KAFKA-10580) Add topic ID support to Fetch request

2020-10-06 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-10580:
--

 Summary: Add topic ID support to Fetch request
 Key: KAFKA-10580
 URL: https://issues.apache.org/jira/browse/KAFKA-10580
 Project: Kafka
  Issue Type: Sub-task
Reporter: Justine Olshan


Prevent fetching a stale topic with topic IDs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #116

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Refactor unit tests around RocksDBConfigSetter (#9358)

[github] KAFKA-10527; Voters should not reinitialize as leader in same epoch 
(#9348)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore STARTED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldWorkWithInMemoryStore PASSED

org.apache.kafka.streams.test.wordcount.WindowedWordCountProcessorTest > 
shouldFailWithLogging STARTED


[jira] [Created] (KAFKA-10579) Flaky test connect.integration.InternalTopicsIntegrationTest.testStartWhenInternalTopicsCreatedManuallyWithCompactForBrokersDefaultCleanupPolicy

2020-10-06 Thread Sophie Blee-Goldman (Jira)
Sophie Blee-Goldman created KAFKA-10579:
---

 Summary: Flaky test 
connect.integration.InternalTopicsIntegrationTest.testStartWhenInternalTopicsCreatedManuallyWithCompactForBrokersDefaultCleanupPolicy
 Key: KAFKA-10579
 URL: https://issues.apache.org/jira/browse/KAFKA-10579
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Sophie Blee-Goldman


 

java.lang.NullPointerException
	at java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
	at org.reflections.Store.getAllIncluding(Store.java:82)
	at org.reflections.Store.getAll(Store.java:93)
	at org.reflections.Reflections.getSubTypesOf(Reflections.java:404)
	at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.getPluginDesc(DelegatingClassLoader.java:355)
	at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanPluginPath(DelegatingClassLoader.java:340)
	at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.scanUrlsAndAddPlugins(DelegatingClassLoader.java:268)
	at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initPluginLoader(DelegatingClassLoader.java:216)
	at org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.initLoaders(DelegatingClassLoader.java:209)
	at org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:61)
	at org.apache.kafka.connect.cli.ConnectDistributed.startConnect(ConnectDistributed.java:91)
	at org.apache.kafka.connect.util.clusters.WorkerHandle.start(WorkerHandle.java:50)
	at org.apache.kafka.connect.util.clusters.EmbeddedConnectCluster.addWorker(EmbeddedConnectCluster.java:167)
	at org.apache.kafka.connect.integration.InternalTopicsIntegrationTest.testStartWhenInternalTopicsCreatedManuallyWithCompactForBrokersDefaultCleanupPolicy(InternalTopicsIntegrationTest.java:260)

https://github.com/apache/kafka/pull/9280/checks?check_run_id=1214776222



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-10338) Support PEM format for SSL certificates and private key

2020-10-06 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-10338.

Fix Version/s: 2.7.0
 Reviewer: Manikumar
   Resolution: Fixed

> Support PEM format for SSL certificates and private key
> ---
>
> Key: KAFKA-10338
> URL: https://issues.apache.org/jira/browse/KAFKA-10338
> Project: Kafka
>  Issue Type: New Feature
>  Components: security
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.7.0
>
>
> We currently support only file-based JKS/PKCS12 format for SSL key stores and 
> trust stores. It will be good to add support for PEM as configuration values 
> that fit better with config externalization.
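For context, a PEM-based configuration in the style proposed by KIP-651 might look like the sketch below. The property names follow the KIP write-up; consult KIP-651 and the Kafka security documentation for the authoritative names and formats, and note the certificate/key bodies here are elided placeholders.

```properties
# Keystore/truststore material supplied inline as PEM values, instead of a
# path to a JKS/PKCS12 file, so it can be injected via config providers.
ssl.keystore.type=PEM
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----
ssl.keystore.key=-----BEGIN PRIVATE KEY----- ... -----END PRIVATE KEY-----
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE----- ... -----END CERTIFICATE-----
```

Because the PEM text is an ordinary configuration value rather than a file path, it composes with config externalization (e.g. pulling secrets from a config provider) in a way file-based stores cannot.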



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #28

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[Konstantine Karantasis] MINOR: Annotate test BlockingConnectorTest as 
integration test (#9379)


--
[...truncated 3.15 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #117

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Annotate test BlockingConnectorTest as integration test (#9379)

[github] KAFKA-6733: Printing additional ConsumerRecord fields in 
DefaultMessageFormatter (#9099)


--
[...truncated 3.36 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldDeleteAndReturnPlainValue PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutAllWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > 
shouldPutIfAbsentWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.KeyValueStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldStoreAndReturnStateStores PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureOutputRecordsUsingTo PASSED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
STARTED

org.apache.kafka.streams.MockProcessorContextTest > shouldCaptureOutputRecords 
PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
fullConstructorShouldSetAllExpectedAttributes PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldCaptureCommitsAndAllowReset PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildName PASSED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex STARTED

org.apache.kafka.streams.MockProcessorContextTest > 
shouldThrowIfForwardedWithDeprecatedChildIndex PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #115

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Annotate test BlockingConnectorTest as integration test (#9379)

[github] KAFKA-6733: Printing additional ConsumerRecord fields in 
DefaultMessageFormatter (#9099)


--
[...truncated 3.34 MB...]
org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValue STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValue PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateTopicWithNullTopicName PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnIsOpen 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldReturnName 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

> Task :streams:upgrade-system-tests-0100:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0100:test
> Task :streams:upgrade-system-tests-0101:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0101:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0101:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0101:checkstyleMain NO-SOURCE
> Task 

[jira] [Resolved] (KAFKA-10527) Voters should always initialize as followers

2020-10-06 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10527.
-
Resolution: Fixed

> Voters should always initialize as followers
> 
>
> Key: KAFKA-10527
> URL: https://issues.apache.org/jira/browse/KAFKA-10527
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> The current state initialization logic preserves whatever state the broker 
> was in when it was shutdown. In particular, if the node was previously a 
> leader, it will remain a leader. This can be dangerous if we want to consider 
> optimizations such as in KAFKA-10526 since the leader might lose unflushed 
> data following the restart. It would be safer to always initialize as a 
> follower so that a leader's tenure never crosses process restarts. This helps 
> to guarantee the uniqueness of the (offset, epoch) tuple which the 
> replication protocol depends on.
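The initialization rule described above can be sketched outside Kafka (plain Python, not the actual Raft implementation; the function and type names are made up for illustration):

```python
from enum import Enum

class Role(Enum):
    FOLLOWER = "follower"
    LEADER = "leader"

def initialize_state(persisted_role, persisted_epoch):
    # The safer rule described in the ticket: whatever role was persisted
    # before shutdown, always come back up as a follower, so a leader's
    # tenure never spans a process restart. The epoch is preserved.
    return Role.FOLLOWER, persisted_epoch

# Even a node that was leader before the restart rejoins as a follower.
role, epoch = initialize_state(Role.LEADER, 7)
```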





Re: [VOTE] KIP-653: Upgrade log4j to log4j2

2020-10-06 Thread David Jacot
Thanks for driving this, Dongjin!

The KIP looks good to me. I’m +1 (non-binding).

Best,
David

On Tue, Oct 6, 2020 at 17:23, Dongjin Lee wrote:

> As of present:
>
> - Binding: +2 (Gwen, John)
> - Non-binding: 0
>
> Thanks,
> Dongjin
>
> On Sat, Oct 3, 2020 at 10:51 AM John Roesler  wrote:
>
> > Thanks for the KIP, Dongjin!
> >
> > I’ve just reviewed the KIP document, and it looks good to me.
> >
> > I’m +1 (binding)
> >
> > Thanks,
> > John
> >
> > On Fri, Oct 2, 2020, at 19:11, Gwen Shapira wrote:
> > > +1 (binding)
> > >
> > > A very welcome update :)
> > >
> > > On Tue, Sep 22, 2020 at 9:09 AM Dongjin Lee 
> wrote:
> > > >
> > > > Hi devs,
> > > >
> > > > Here I open the vote for KIP-653: Upgrade log4j to log4j2. It replaces
> > > > the obsolete log4j logging library with the current standard, log4j2,
> > > > while maintaining backward compatibility.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > --
> > > > *Dongjin Lee*
> > > >
> > > > *A hitchhiker in the mathematical world.*
> > > >
> > > >
> > > >
> > > >
> > > > *github:  github.com/dongjinleekr
> > > > keybase:
> > https://keybase.io/dongjinleekr
> > > > linkedin:
> > kr.linkedin.com/in/dongjinleekr
> > > > speakerdeck:
> > speakerdeck.com/dongjin
> > > > *
> > >
> > >
> > >
> > > --
> > > Gwen Shapira
> > > Engineering Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>
>
> --
> *Dongjin Lee*
>
> *A hitchhiker in the mathematical world.*
>
>
>
>
> *github:  github.com/dongjinleekr
> keybase: https://keybase.io/dongjinleekr
> linkedin: kr.linkedin.com/in/dongjinleekr
> speakerdeck:
> speakerdeck.com/dongjin
> *
>


Re: [VOTE] KIP-653: Upgrade log4j to log4j2

2020-10-06 Thread Dongjin Lee
As of now:

- Binding: +2 (Gwen, John)
- Non-binding: 0

Thanks,
Dongjin

On Sat, Oct 3, 2020 at 10:51 AM John Roesler  wrote:

> Thanks for the KIP, Dongjin!
>
> I’ve just reviewed the KIP document, and it looks good to me.
>
> I’m +1 (binding)
>
> Thanks,
> John
>
> On Fri, Oct 2, 2020, at 19:11, Gwen Shapira wrote:
> > +1 (binding)
> >
> > A very welcome update :)
> >
> > On Tue, Sep 22, 2020 at 9:09 AM Dongjin Lee  wrote:
> > >
> > > Hi devs,
> > >
> > > Here I open the vote for KIP-653: Upgrade log4j to log4j2. It replaces
> > > the obsolete log4j logging library with the current standard, log4j2,
> > > while maintaining backward compatibility.
> > >
> > > Thanks,
> > > Dongjin
> > >
> > > --
> > > *Dongjin Lee*
> > >
> > > *A hitchhiker in the mathematical world.*
> > >
> > >
> > >
> > >
> > > *github:  github.com/dongjinleekr
> > > keybase:
> https://keybase.io/dongjinleekr
> > > linkedin:
> kr.linkedin.com/in/dongjinleekr
> > > speakerdeck:
> speakerdeck.com/dongjin
> > > *
> >
> >
> >
> > --
> > Gwen Shapira
> > Engineering Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


-- 
*Dongjin Lee*

*A hitchhiker in the mathematical world.*




github: github.com/dongjinleekr
keybase: https://keybase.io/dongjinleekr
linkedin: kr.linkedin.com/in/dongjinleekr
speakerdeck: speakerdeck.com/dongjin


Re: [DISCUSS] KIP-508: Make Suppression State Queriable - rebooted.

2020-10-06 Thread Dongjin Lee
You mean the performance issue related to `#all` or `#range` queries, right?
I reviewed the second approach (i.e., extending `ValueGetter`), and it is
worth trying. Since KIP-508 was dropped from the 2.7.0 release, we have
enough time now.

Let me give it a try. I think I can have a rough version by this weekend.

Regards,
Dongjin

On Thu, Oct 1, 2020 at 4:52 AM John Roesler  wrote:

> Thanks Dongjin,
>
> It typically is nicer to be able to see usage examples, so
> I'd certainly be in favor if you're willing to add it to the
> KIP.
>
> I'm wondering if it's possible to implement the whole
> ReadOnlyKeyValueStore interface as proposed, if we really go
> ahead and just internally query into the suppression buffer
> as well as using the upstream ValueGetter. The reason is
> twofold:
> 1. The suppression buffer is ordered by arrival time, not by
> key. There is a by-key index, but it is also not ordered the
> same way that in-memory stores are ordered. Thus, we'd have
> a hard time implementing key-based range scans.
> 2. The internal ValueGetter interface only supports get-by-
> key lookups, so it would also need to be expanded to support
> range scans on the parent table.
>
> Neither of these problems are insurmountable, but I'm
> wondering if we _want_ to surmount them right now. Or should
> we instead just throw an UnsupportedOperationException on
> any API call that's inconvenient to implement right now?
> Then, we could get incremental value by first supporting
> (eg) key-based lookups and adding scans later.
>
> Or does this mean that our design so far is invalid, and we
> should really just make people provision a separate
> Materialized downstream? To pull this off, we'd need to
> first address KIP-300's challenges, though.
>
> I'm honestly not sure what the right call is here.
>
> Thanks,
> -John
>
> On Thu, 2020-10-01 at 01:50 +0900, Dongjin Lee wrote:
> > > It seems like it must be a ReadOnlyKeyValueStore. Does that sound
> right?
> >
> > Yes, it is. Would it be better to add a detailed description of how this
> > feature affects interactive queries, with examples?
> >
> > Best,
> > Dongjin
> >
> > On Tue, Sep 29, 2020 at 10:31 AM John Roesler 
> wrote:
> >
> > > Hi Dongjin,
> > >
> > > Thanks! Sorry, I missed your prior message. The proposed API looks
> good to
> > > me.
> > >
> > > I’m wondering if we should specify what kind of store view would be
> > > returned when querying the operation result. It seems like it must be a
> > > ReadOnlyKeyValueStore. Does that sound right?
> > >
> > > Thanks!
> > > John
> > >
> > > On Mon, Sep 28, 2020, at 10:06, Dongjin Lee wrote:
> > > > Hi John,
> > > >
> > > > I updated the KIP with the discussion above. The 'Public Interfaces'
> > > > section describes the new API, and the 'Rejected Alternatives'
> section
> > > > describes the reasoning about why we selected this API design and
> > > rejected
> > > > the other alternatives.
> > > >
> > > > Please have a look when you are free. And please note that the KIP
> freeze
> > > > for 2.7.0 is imminent.
> > > >
> > > > Thanks,
> > > > Dongjin
> > > >
> > > > On Mon, Sep 21, 2020 at 11:35 PM Dongjin Lee 
> wrote:
> > > >
> > > > > Hi John,
> > > > >
> > > > > I updated the PR applying the API changes we discussed above. I am
> now
> > > > > updating the KIP document.
> > > > >
> > > > > Thanks,
> > > > > Dongjin
> > > > >
> > > > > On Sat, Sep 19, 2020 at 10:42 AM John Roesler  >
> > > wrote:
> > > > > > Hi Dongjin,
> > > > > >
> > > > > > Yes, that’s right. My the time of KIP-307, we had no choice but
> to
> > > add a
> > > > > > second name. But we do have a choice with Suppress.
> > > > > >
> > > > > > Thanks!
> > > > > > -John
> > > > > >
> > > > > > On Thu, Sep 17, 2020, at 13:14, Dongjin Lee wrote:
> > > > > > > Hi John,
> > > > > > >
> > > > > > > I just reviewed KIP-307. As far as I understood, ...
> > > > > > >
> > > > > > > 1. There was Materialized name initially.
> > > > > > > 2. With KIP-307, Named Operations were added.
> > > > > > > 3. Now we have two options for materializing suppression. If
> we take
> > > > > > > Materialized name here, we have two names for the same
> operation,
> > > which
> > > > > > is
> > > > > > > not feasible.
> > > > > > >
> > > > > > > Do I understand correctly?
> > > > > > >
> > > > > > > > Do you have a use case in mind for having two separate names
> for
> > > the
> > > > > > > operation and the view?
> > > > > > >
> > > > > > > No. I am now entirely convinced with your suggestion.
> > > > > > >
> > > > > > > I just started to update the draft implementation. If I
> understand
> > > > > > > correctly, please notify me; I will update the KIP by adding
> the
> > > > > > discussion
> > > > > > > above.
> > > > > > >
> > > > > > > Best,
> > > > > > > Dongjin
> > > > > > >
> > > > > > > On Thu, Sep 17, 2020 at 11:06 AM John Roesler <
> vvcep...@apache.org>
> > > > > > wrote:
> > > > > > > > Hi Dongjin,
> > > > > > > >
> > > > > > > > Thanks for the reply. Yes, that’s 
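The trade-off John raises above can be sketched outside Kafka (a toy Python model, not Kafka's API or classes; all names here are invented for illustration): the suppression buffer is ordered by arrival time with a by-key index, so point lookups are cheap, while key-ordered range scans would need extra machinery — hence the idea of starting with key lookups and throwing on everything else.

```python
class SuppressedViewSketch:
    """Toy model of a read-only view over a suppressed KTable."""

    def __init__(self, buffer_by_key, parent_getter):
        self.buffer_by_key = buffer_by_key  # by-key index into the buffer
        self.parent_getter = parent_getter  # analogue of the upstream ValueGetter

    def get(self, key):
        # Key-based lookup: consult the suppression buffer first,
        # then fall back to the parent table.
        if key in self.buffer_by_key:
            return self.buffer_by_key[key]
        return self.parent_getter(key)

    def range(self, from_key, to_key):
        # Incremental approach: support point lookups now, reject scans,
        # since neither the buffer nor the ValueGetter is key-ordered.
        raise NotImplementedError("range scans not supported yet")

# "a" is still buffered; "b" has already been forwarded to the parent table.
view = SuppressedViewSketch({"a": 1}, {"b": 2}.get)
```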

KIP-675: Convert KTable to a KStream using the previous value

2020-10-06 Thread Javier Freire Riobo
Hi all,

I'd like to propose these changes to the Kafka Streams API.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-675%3A+Convert+KTable+to+a+KStream+using+the+previous+value

This is a proposal to convert a KTable to a KStream knowing the previous
value of the record.

I also opened a proof-of-concept PR:

PR #9381: https://github.com/apache/kafka/pull/9381

What do you think?

Cheers,
Javier Freire


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #147

2020-10-06 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10578) KIP-675: Convert KTable to a KStream using the previous value

2020-10-06 Thread Jira
Javier Freire Riobó created KAFKA-10578:
---

 Summary: KIP-675: Convert KTable to a KStream using the previous 
value
 Key: KAFKA-10578
 URL: https://issues.apache.org/jira/browse/KAFKA-10578
 Project: Kafka
  Issue Type: Wish
  Components: streams
Reporter: Javier Freire Riobó


Imagine an entity for which we want to emit the difference between its current 
and previous state. The simplest case is an integer value, where we want to 
emit the subtraction of the previous value from the current one.

For example, for the input stream 4, 6, 3 the expected output is 4 (4 - 0), 
2 (6 - 4), -3 (3 - 6).

Today, the way to achieve this with Kafka Streams is through an aggregate.

The main problem, apart from requiring more code, is that if the same event is 
received twice at around the same time and the commit interval is not 0, the 
difference is lost and nothing is emitted.
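The arithmetic in the example above can be sketched outside Kafka as follows (plain Python, not the proposed Streams API; `diff_stream` is an invented name used only to illustrate the expected semantics):

```python
def diff_stream(values, initial=0):
    # Emit (current - previous) for each update: the behavior you would
    # get from converting a KTable to a KStream while still having
    # access to the previous value of the record.
    previous = initial
    out = []
    for current in values:
        out.append(current - previous)
        previous = current
    return out
```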

 





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #116

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix failing test due to KAFKA-10556 PR (#9372)


--
[...truncated 3.36 MB...]

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldPutWindowStartTimestampWithUnknownTimestamp PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > 
shouldReturnIsPersistent PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardClose 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardFlush 
PASSED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
STARTED

org.apache.kafka.streams.internals.WindowStoreFacadeTest > shouldForwardInit 
PASSED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonUsedOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testEmptyTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testStartTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testNegativeAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
STARTED

org.apache.kafka.streams.TestTopicsTest > shouldNotAllowToCreateWithNullDriver 
PASSED

org.apache.kafka.streams.TestTopicsTest > testDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testValue STARTED

org.apache.kafka.streams.TestTopicsTest > testValue PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestampAutoAdvance PASSED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testOutputWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputTopicWithNullTopicName PASSED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde STARTED

org.apache.kafka.streams.TestTopicsTest > testWrongSerde PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMapWithNull PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingOutputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics STARTED

org.apache.kafka.streams.TestTopicsTest > testMultipleTopics PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueList PASSED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver STARTED

org.apache.kafka.streams.TestTopicsTest > 
shouldNotAllowToCreateOutputWithNullDriver PASSED

org.apache.kafka.streams.TestTopicsTest > testValueList STARTED

org.apache.kafka.streams.TestTopicsTest > testValueList PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordList PASSED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic STARTED

org.apache.kafka.streams.TestTopicsTest > testNonExistingInputTopic PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValuesToMap PASSED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList STARTED

org.apache.kafka.streams.TestTopicsTest > testRecordsToList PASSED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration STARTED

org.apache.kafka.streams.TestTopicsTest > testKeyValueListDuration PASSED

org.apache.kafka.streams.TestTopicsTest > testInputToString STARTED

org.apache.kafka.streams.TestTopicsTest > testInputToString PASSED

org.apache.kafka.streams.TestTopicsTest > testTimestamp STARTED

org.apache.kafka.streams.TestTopicsTest > testTimestamp PASSED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders STARTED

org.apache.kafka.streams.TestTopicsTest > testWithHeaders PASSED

org.apache.kafka.streams.TestTopicsTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #114

2020-10-06 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Fix failing test due to KAFKA-10556 PR (#9372)


--
[...truncated 3.33 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

[Gradle test log excerpt, truncated: org.apache.kafka.streams.TopologyTestDriverTest — all listed tests (Eos enabled = false) PASSED]

Re: [DISCUSS] KIP-660: Pluggable ReplicaAssignor

2020-10-06 Thread Mickael Maison
Hi Efe,

Thanks for the feedback.
We also need to assign replicas when adding partitions to an existing
topic. This is why I chose to use a list of partition ids. Otherwise
we'd need the number of partitions and the starting partition id.

Let me know if you have more questions
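
To make the shape of the proposal concrete, here is a minimal sketch of an assignor interface that receives a list of partition ids rather than a partition count, so the same call serves both topic creation and adding partitions to an existing topic. All names and signatures below are illustrative assumptions, not the KIP's actual API, and the round-robin implementation is for demonstration only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical interface modeled on the thread's description: it receives the
// partition ids to assign, so adding partitions 3..5 to an existing 3-partition
// topic uses the same call as creating partitions 0..2 of a new topic.
interface ReplicaAssignor {
    // Returns partitionId -> ordered list of broker ids,
    // with replicationFactor entries per partition.
    Map<Integer, List<Integer>> assignReplicasToBrokers(
            String topic, List<Integer> partitionIds,
            short replicationFactor, List<Integer> brokers);
}

// Trivial round-robin implementation, for illustration only.
class RoundRobinAssignor implements ReplicaAssignor {
    @Override
    public Map<Integer, List<Integer>> assignReplicasToBrokers(
            String topic, List<Integer> partitionIds,
            short replicationFactor, List<Integer> brokers) {
        if (replicationFactor > brokers.size()) {
            // Mirrors the thread's discussion: fail loudly rather than
            // return an assignment that cannot meet the replication factor.
            throw new IllegalStateException(
                    "Not enough brokers for replication factor " + replicationFactor);
        }
        Map<Integer, List<Integer>> result = new HashMap<>();
        for (Integer partition : partitionIds) {
            List<Integer> replicas = new ArrayList<>();
            for (int r = 0; r < replicationFactor; r++) {
                replicas.add(brokers.get((partition + r) % brokers.size()));
            }
            result.put(partition, replicas);
        }
        return result;
    }
}
```

Under this shape, expanding a topic simply passes the new partition ids (say, 3 and 4) with the current cluster's broker list; no separate "starting partition id" parameter is needed.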

On Tue, Oct 6, 2020 at 2:16 AM Efe Gencer  wrote:
>
> Hi Mickael,
>
> Thanks for the KIP!
> A call to an external system, e.g. Cruise Control, in the implementation of 
> the provided interface can indeed help with the initial assignment of 
> partitions.
>
> I am curious why the proposed `ReplicaAssignor#assignReplicasToBrokers` 
> receives a list of partition ids as opposed to the number of partitions to 
> create the topic with?
>
> Would you clarify if this API is expected to be used (1) only for new topics 
> or (2) also for existing topics?
>
> Best,
> Efe
> 
> From: Mickael Maison 
> Sent: Thursday, October 1, 2020 9:43 AM
> To: dev 
> Subject: Re: [DISCUSS] KIP-660: Pluggable ReplicaAssignor
>
> Thanks Tom for the feedback!
>
> 1. If the data returned by the ReplicaAssignor implementation does not
> match what was requested, we'll also throw a ReplicaAssignorException
>
> 2. Good point, I'll update the KIP
>
> 3. The KIP mentions an error code associated with
> ReplicaAssignorException: REPLICA_ASSIGNOR_FAILED
>
> 4. (I'm naming your last question 4.) I spent some time looking at it.
> Initially I wanted to follow the model from the topic policies. But as
> you said, computing assignments for the whole batch may be more
> desirable and also avoids incrementally updating the cluster state.
> The logic in AdminManager is very much centered around doing 1 topic
> at a time but as far as I can tell we should be able to update it to
> compute assignments for the whole batch.
>
> I'll play a bit more with 4. and I'll update the KIP in the next few days
>
> On Mon, Sep 21, 2020 at 10:29 AM Tom Bentley  wrote:
> >
> > Hi Mickael,
> >
> > A few thoughts about the ReplicaAssignor contract:
> >
> > 1. What happens if a ReplicaAssignor impl returns a Map where some
> > assignments don't meet the given replication factor?
> > 2. Fixing the signature of assignReplicasToBrokers() as you have would make
> > it hard to pass extra information in the future (e.g. maybe someone comes
> > up with a use case where passing the clientId would be needed) because it
> > would require the interface be changed. If you factored all the parameters
> > into some new type then the signature could be
> > assignReplicasToBrokers(RequiredReplicaAssignment) and adding any new
> > properties to RequiredReplicaAssignment wouldn't break the contract.
> > 3. When an assignor throws ReplicaAssignorException, what error code will be
> > returned to the client?
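
Tom's second point is the classic parameter-object pattern: wrap the arguments in a single type so new fields can be added later without breaking the interface. A hedged sketch, with hypothetical names (the KIP does not define these types):

```java
import java.util.List;
import java.util.Map;

// Hypothetical parameter object: a new field (e.g. clientId) can be added
// here later without changing the assignor's method signature.
final class ReplicaAssignmentRequest {
    private final String topic;
    private final List<Integer> partitionIds;
    private final short replicationFactor;

    ReplicaAssignmentRequest(String topic, List<Integer> partitionIds,
                             short replicationFactor) {
        this.topic = topic;
        this.partitionIds = partitionIds;
        this.replicationFactor = replicationFactor;
    }

    String topic() { return topic; }
    List<Integer> partitionIds() { return partitionIds; }
    short replicationFactor() { return replicationFactor; }
}

// The interface then takes one argument, so its contract is stable
// even as the request type grows. (Named ...2 only to keep this sketch
// distinct from the earlier one.)
interface ReplicaAssignor2 {
    Map<Integer, List<Integer>> assignReplicasToBrokers(ReplicaAssignmentRequest request);
}
```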
> >
> > Also, this sentence got me thinking:
> >
> > > If multiple topics are present in the request, AdminManager will update
> > the Cluster object so the ReplicaAssignor class has access to the up to
> > date cluster metadata.
> >
> > Previously I've looked at how we can improve Kafka's pluggable policy
> > support to pass the more of the cluster state to policy implementations. A
> > similar problem exists there, but the more cluster state you pass the
> > harder it is to incrementally change it as you iterate through the topics
> > to be created/modified. This likely isn't a problem here and now, but it
> > could limit any future changes to the pluggable assignors. Did you consider
> > the alternative of the assignor just being passed a Set of assignments?
> > That means you can just pass the cluster state as it exists at the time. It
> > also gives the implementation more information to work with to find more
> > optimal assignments. For example, it could perform a bin packing type
> > assignment which found a better optimum for the whole collection of topics
> > than one which was only told about all the topics in the request
> > sequentially.
> >
> > Otherwise this looks like a valuable feature to me.
> >
> > Kind regards,
> >
> > Tom
> >
> >
> >
> >
> >
> > On Fri, Sep 11, 2020 at 6:19 PM Robert Barrett 
> > wrote:
> >
> > > Thanks Mickael, I think adding the new Exception resolves my concerns.
> > >
> > > On Thu, Sep 3, 2020 at 9:47 AM Mickael Maison 
> > > wrote:
> > >
> > > > Thanks Robert and Ryanne for the feedback.
> > > >
> > > > ReplicaAssignor implementations can throw an exception to indicate an
> > > > assignment can't be computed. This is already what the current round
> > > > robin assignor does. Unfortunately at the moment, there are no generic
> > > > error codes if it fails, it's either INVALID_PARTITIONS,
> > > > INVALID_REPLICATION_FACTOR or worse UNKNOWN_SERVER_ERROR.
> > > >
> > > > So I think it would be nice to introduce a new Exception/Error code to
> > > > cover any failures in the assignor and avoid UNKNOWN_SERVER_ERROR.
> > > >
> > > > I've updated the KIP accordingly, let me know if you have more 
> > > > questions.
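
The error-code change described above can be sketched as a simple exception-to-code mapping: a dedicated exception type whose failures surface as REPLICA_ASSIGNOR_FAILED instead of the UNKNOWN_SERVER_ERROR catch-all. Class and enum names are illustrative, not Kafka's actual internals:

```java
// Hypothetical exception thrown by assignor implementations.
class ReplicaAssignorException extends RuntimeException {
    ReplicaAssignorException(String message) { super(message); }
}

// Subset of error codes mentioned in the thread, as an illustrative enum.
enum Errors { INVALID_PARTITIONS, INVALID_REPLICATION_FACTOR,
              REPLICA_ASSIGNOR_FAILED, UNKNOWN_SERVER_ERROR }

class ErrorMapper {
    static Errors codeFor(Throwable t) {
        if (t instanceof ReplicaAssignorException) {
            // Dedicated code for assignor failures, per the KIP update.
            return Errors.REPLICA_ASSIGNOR_FAILED;
        }
        // Previous behavior: unclassified failures fall through to the catch-all.
        return Errors.UNKNOWN_SERVER_ERROR;
    }
}
```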
> > > >
> > > > On Fri, 

Re: Kafka/Zookeeper

2020-10-06 Thread Tom Bentley
Hi Tom,

It's not that Kafka 2.4.0 _requires_ Zookeeper 3.5.7. Rather, previous
versions of Zookeeper suffered from security vulnerabilities and when Red
Hat were preparing our product based on Kafka 2.4.0 the decision was taken
to ship Zookeeper 3.5.7 to avoid those vulnerabilities. IIRC, 3.5.7 wasn't
available when Apache Kafka 2.4.0 was released, but it was by the time we
were preparing the product, and it was subsequently picked up for Apache
Kafka 2.4.1 to avoid those same vulnerabilities.

Kind regards,

Tom

On Tue, Oct 6, 2020 at 6:09 AM Scott, Thomas G 
wrote:

> I have a quick question regarding Kafka 2.4.0 and its compatibility with
> Zookeeper.  According to Red Hat, it looks like Kafka 2.4.0 requires
> Zookeeper 3.5.7 on RHEL 7.  Could you please confirm? According to the
> release notes, it appears that Zookeeper 3.5.7 is not required until
> Kafka 2.4.1.
>
>
> Thanks,
> Tom
>