Re: [DISCUSS] KIP-622 Add currentSystemTimeMs and currentStreamTimeMs to ProcessorContext

2020-08-13 Thread John Roesler
Thanks for the reply, Matthias,

I see what you mean. I suppose I was thinking that we would pass in the cached 
system time, which is also what we’re proposing to add to the ProcessorContext.

If you think there’s something about the timestamp extractor in particular that 
would make people want more precision, then something like Time would do the 
trick. Since it’s not a public API, maybe just ‘Supplier<Long>’ would be
appropriate.
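
For illustration, a mockable wall-clock extractor along those lines might
look like this (a minimal sketch; the Supplier<Long>-injected clock is the
idea under discussion, not an existing API):

import java.util.function.Supplier;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Sketch only: a wall-clock extractor whose time source can be swapped out
// in tests, e.g. with the mocked time of TopologyTestDriver.
public class MockableWallclockTimestampExtractor implements TimestampExtractor {

    private final Supplier<Long> clock;

    public MockableWallclockTimestampExtractor(final Supplier<Long> clock) {
        this.clock = clock; // e.g. System::currentTimeMillis in production
    }

    @Override
    public long extract(final ConsumerRecord<Object, Object> record, final long partitionTime) {
        return clock.get();
    }
}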

But I also don’t want to bikeshed it. My only concern was that it’s awkward to 
ask people to actually change their application code for testing. But maybe in 
this case, an option is better than no option, and if people don’t like it, we 
can always deprecate the mock extractor and evolve the interface later. 

So, I’m +1 either way.

Thanks,
John

On Mon, Aug 3, 2020, at 16:28, Matthias J. Sax wrote:
> Interesting proposal.
> 
> However, it raises the question of how the runtime would pass in the
> `systemTime` parameter. To be accurate, we would need to call
> `Time.milliseconds()` each time before we call the timestamp extractor.
> This sounds expensive, and maybe the extractor does not even use this value.
> 
> Or we only call `Time.milliseconds()` periodically (as we also do in our
> runtime code) to make it cheap; however, we lose precision? Not sure if
> we can make this trade-off for the user?
> 
> Handing in the `Time` object itself might be another idea; however, it
> seems "dangerous", as it does not actually seem to be public API.
> 
> Last, do we really think we need this feature? We never had a feature
> request for it and I am not aware of any issue with the current
> TimestampExtractor interface.
> 
> It's always easier to add it later if there is real demand instead of
> pro-actively changing it (and maybe the need to deprecate and remove
> later) with no clear benefit? Adding the `MockTimestampsExtractor` as
> part of the test-utils package seems less "dangerous" and should do the
> job, allowing us to collect feedback. If it's not good enough, we can
> still change the TimestampExtractor interface as a follow up?
> 
> 
> -Matthias
> 
> On 7/28/20 10:03 AM, John Roesler wrote:
> > Thanks Matthias,
> > 
> > This is a really good point. It might be a bummer
> > to have to actually change the topology between
> > testing and production. Do you think we can rather
> > evolve the TimestampExtractor interface to let
> > Streams pass in the current system time, along with
> > the current record and the current partition time?
> > 
> > For example, we could add a new method:
> > long extract(
> >   ConsumerRecord<Object, Object> record,
> >   long partitionTime,
> >   long systemTime
> > );
> > 
> > Then, Streams could pass in the current system 
> > time and TopologyTestDriver could pass the mocked
> > time. Additionally, users who implement
> > TimestampExtractor on their own would be able to
> > deterministically unit-test their own implementation.
> > 
> > It's always a challenge to add to an interface without
> > breaking compatibility. In this case, it seems like
> > we could provide a default implementation that just
> > ignores the systemTime argument and calls
> > extract(record, partitionTime) and also deprecate
> > the existing method. Then custom implementations
> > would get a deprecation warning telling them to
> > implement the other method, and when we remove
> > the deprecated extract(record, partitionTime), we can
> > also drop the default implementation from the new
> > method.
> > 
> > Specifically, what do you think about:
> > =
> > public interface TimestampExtractor {
> >     /**
> >      * @deprecated Since 2.7. Implement
> >      *   {@code extract(ConsumerRecord<Object, Object> record, long partitionTime, long systemTime)} instead.
> >      */
> >     @Deprecated
> >     long extract(
> >       ConsumerRecord<Object, Object> record,
> >       long partitionTime
> >     );
> > 
> >     default long extract(
> >       ConsumerRecord<Object, Object> record,
> >       long partitionTime,
> >       long systemTime) {
> >         return extract(record, partitionTime);
> >     }
> > }
> > =
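> > 
> > For illustration (a sketch against this proposal, with hypothetical names,
> > not existing API), a custom implementation that is deterministically
> > unit-testable via the three-argument method could look like:
> > 
> > // Sketch: the two-argument method delegates to the new one with real time,
> > // while tests call the three-argument method with a fixed systemTime.
> > public class MyExtractor implements TimestampExtractor {
> >     @Override
> >     public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
> >         return extract(record, partitionTime, System.currentTimeMillis());
> >     }
> > 
> >     @Override
> >     public long extract(ConsumerRecord<Object, Object> record, long partitionTime, long systemTime) {
> >         return record.timestamp() >= 0 ? record.timestamp() : systemTime;
> >     }
> > }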
> > 
> > Thanks,
> > -John
> > 
> > On Sun, Jul 26, 2020, at 15:47, Matthias J. Sax wrote:
> >> Hi,
> >>
> >> I just had one more thought about an additional improvement we might
> >> want to include in this KIP.
> >>
> >> Kafka Streams ships with a `WallclockTimestampExtractor` that just
> >> returns `System.currentTimeMillis()` and thus, cannot be mocked. And it
> >> seems that there is no way for a user to build a custom timestamps
> >> extractor that returns the TTD's mocked system time.
> >>
> >> Thus, it might be nice to add a `MockTimeExtractor` (only in the
> >> test-utils package) that users could set, and this `MockTimeExtractor`
> >> would return the mocked system time.
> >>
> >> Thoughts?
> >>
> >>
> >> -Matthias
> >>
> >> On 7/7/20 11:11 PM, Matthias J. Sax wrote:
> >>> I think, we don't need a default implementation for the new methods.
> >>>
> 

Re: [VOTE] KIP-616: Rename implicit Serdes instances in kafka-streams-scala

2020-08-13 Thread John Roesler
Hi Yuriy,

What a coincidence! I was just about to bump this thread myself. It would be 
really nice to get some more votes to push this over the line. 

Thanks for your patience!

-John

On Thu, Aug 13, 2020, at 23:45, Yuriy Badalyantc wrote:
> Hi all
> 
> Bumping this thread again
> 
> On Mon, Aug 10, 2020 at 10:07 AM William Reynolds <
> william.reyno...@instaclustr.com> wrote:
> 
> > Looks good,
> > +1 (non binding)
> >
> > On Mon, 10 Aug 2020 at 13:01, Yuriy Badalyantc  wrote:
> >
> > > Hi everybody.
> > >
> > > Just bumping this thread. This is a pretty minor change only for the
> > > Scala API and it's been pending in the voting state for a while.
> > >
> > > -Yuriy
> > >
> > > On Fri, Aug 7, 2020 at 8:10 AM Yuriy Badalyantc 
> > wrote:
> > >
> > > > Hi everybody.
> > > >
> > > > There was some minor change since the voting process started (nullSerde
> > > > added). Let's continue to vote.
> > > >
> > > > -Yuriy.
> > > >
> > > > On Thu, Jul 9, 2020 at 10:00 PM John Roesler 
> > > wrote:
> > > >
> > > >> Thanks Yuriy,
> > > >>
> > > >> I'm +1 (binding)
> > > >>
> > > >> -John
> > > >>
> > > >> On Wed, Jul 8, 2020, at 23:08, Yuriy Badalyantc wrote:
> > > >> > Hi everybody
> > > >> >
> > > >> > I would like to start a vote  for KIP-616:
> > > >> >
> > > >>
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-616%3A+Rename+implicit+Serdes+instances+in+kafka-streams-scala
> > > >> >
> > > >> > This KIP fixes the name clash in org.apache.kafka.streams.scala.Serdes.
> > > >> >
> > > >> > -Yuriy
> > > >> >
> > > >>
> > > >
> > >
> >
>


Re: [VOTE] KIP-616: Rename implicit Serdes instances in kafka-streams-scala

2020-08-13 Thread Yuriy Badalyantc
Hi all

Bumping this thread again

On Mon, Aug 10, 2020 at 10:07 AM William Reynolds <
william.reyno...@instaclustr.com> wrote:

> Looks good,
> +1 (non binding)
>
> On Mon, 10 Aug 2020 at 13:01, Yuriy Badalyantc  wrote:
>
> > Hi everybody.
> >
> > Just bumping this thread. This is a pretty minor change only for the
> > Scala API and it's been pending in the voting state for a while.
> >
> > -Yuriy
> >
> > On Fri, Aug 7, 2020 at 8:10 AM Yuriy Badalyantc 
> wrote:
> >
> > > Hi everybody.
> > >
> > > There was some minor change since the voting process started (nullSerde
> > > added). Let's continue to vote.
> > >
> > > -Yuriy.
> > >
> > > On Thu, Jul 9, 2020 at 10:00 PM John Roesler 
> > wrote:
> > >
> > >> Thanks Yuriy,
> > >>
> > >> I'm +1 (binding)
> > >>
> > >> -John
> > >>
> > >> On Wed, Jul 8, 2020, at 23:08, Yuriy Badalyantc wrote:
> > >> > Hi everybody
> > >> >
> > >> > I would like to start a vote  for KIP-616:
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-616%3A+Rename+implicit+Serdes+instances+in+kafka-streams-scala
> > >> >
> > >> > This KIP fixes the name clash in org.apache.kafka.streams.scala.Serdes.
> > >> >
> > >> > -Yuriy
> > >> >
> > >>
> > >
> >
>


Re: [DISCUSS] KIP-406: GlobalStreamThread should honor custom reset policy

2020-08-13 Thread John Roesler
Hi all,

It seems like the main motivation for this proposal is satisfied if we just 
implement some recovery mechanism instead of crashing. If the mechanism is 
going to be pausing all the threads until the state is recovered, then it still 
seems like a big enough behavior change to warrant a KIP. 

I have to confess I’m a little unclear on why a custom reset policy for a 
global store, table, or even consumer might be considered wrong. It’s clearly 
wrong for the restore consumer, but the global consumer seems more semantically 
akin to the main consumer than the restore consumer. 

In other words, if it’s wrong to reset a GlobalKTable from latest, shouldn’t it 
also be wrong for a KTable, for exactly the same reason? It certainly seems 
like it would be an odd choice, but I’ve seen many choices I thought were odd 
turn out to have perfectly reasonable use cases. 

As far as the PAPI global store goes, I could see adding the option to 
configure it, since as Matthias pointed out, there’s really no specific 
semantics for the PAPI. But if automatic recovery is really all Navinder 
wanted, then I could also see deferring this until someone specifically wants it.

So the tl;dr is, if we just want to catch the exception and rebuild the store 
by seeking to earliest with no config or API changes, then I’m +1.

I’m wondering if we can improve on the “stop the world” effect of rebuilding 
the global store, though. It seems like we could put our heads together and 
come up with a more fine-grained approach to maintaining the right semantics 
during recovery while still making some progress.  

Thanks,
John


On Sun, Aug 9, 2020, at 02:04, Navinder Brar wrote:
> Hi Matthias,
> 
> IMHO, now as you explained, using ‘global.consumer.auto.offset.reset’ is
> not as straightforward as it seems, and it might change the existing
> behavior for users without them realizing it. I also think that we should
> change the behavior inside the global stream thread to not die on
> InvalidOffsetException and instead clean and rebuild the state from the
> earliest offset. As you mentioned, we would need to pause the stream
> threads till the global store is completely restored. Without that, there
> will be incorrect processing results if they are utilizing a global store
> during processing.
> 
> 
> 
> So, basically we can divide the use-cases into 4 parts.
>
> - PAPI based global stores (will have "earliest" hardcoded)
> - PAPI based state stores (already has auto.reset.config)
> - DSL based GlobalKTables (will have "earliest" hardcoded)
> - DSL based KTables (will continue with auto.reset.config)
> 
> 
> 
> So, this would mean that we are not changing any existing behaviors 
> with this if I am right.
> 
> 
> 
> >> I guess we could improve the code to actually log a warning for this
> >> case, similar to what we do for some configs already (cf
> >> StreamsConfig#NON_CONFIGURABLE_CONSUMER_DEFAULT_CONFIGS).
> 
> I like this idea. In case we go ahead with the above approach and if we
> can’t deprecate it, we should educate users that this config doesn’t work.
> 
> 
> 
> Looking forward to hearing thoughts from others as well.
>  
> 
> - Navinder
> 
> On Tuesday, 4 August, 2020, 05:07:59 am IST, Matthias J. Sax  wrote:
>  
>  Navinder,
> 
> thanks for updating the KIP. I think the motivation section is not
> totally accurate (which is not your fault though, as the history of how
> we handle this case is intertwined...) For example, "auto.offset.reset"
> is hard-coded for the global consumer to "none" and using
> "global.consumer.auto.offset.reset" has no effect (cf
> https://kafka.apache.org/25/documentation/streams/developer-guide/config-streams.html#default-values)
> 
> Also, we could not even really deprecate the config as mentioned in
> rejected alternatives sections, because we need `auto.offset.reset` for
> the main consumer -- and adding a prefix is independent of it. Also,
> because we ignore the config, it is effectively already deprecated/removed, if you wish.
> 
> I guess we could improve the code to actually log a warning for this
> case, similar to what we do for some configs already (cf
> StreamsConfig#NON_CONFIGURABLE_CONSUMER_DEFAULT_CONFIGS).
> 
> 
> The other question is about compatibility with regard to default
> behavior: if we want to reintroduce `global.consumer.auto.offset.reset`
> this basically implies that we need to respect `auto.offset.reset`, too.
> Remember, that any config without prefix is applied to all clients that
> support this config. Thus, if a user does not limit the scope of the
> config to the main consumer (via `main.consumer.auto.offset.reset`) but
> uses the non-prefix versions and sets it to "latest" (and relies on the
> current behavior that `auto.offset.reset` is "none", and effectively
> "earliest" on the global consumer), the user might end up with a
> surprise as the global consumer behavior would switch from "earliest" to
> "latest" 

Jenkins build is back to normal : Kafka » kafka-trunk-jdk8 #11

2020-08-13 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-639 Move nodeLevelSensor and storeLevelSensor methods from StreamsMetricsImpl to StreamsMetrics

2020-08-13 Thread Mohamed Chebbi

KIP updated with the comments of Bruno Cadonna.

On 06/07/2020 at 22:36, Mohamed Chebbi wrote:

Thanks Bruno for your review.

Changes were added as you suggested.

On 06/07/2020 at 14:57, Bruno Cadonna wrote:

Hi Mohamed,

Thank you for the KIP.

Comments regarding the KIP wiki:

1. In section "Public Interface", you should state what you want to
change in interface StreamsMetrics. In your case, you want to add two
methods. You can find a good example how to describe this in KIP-444
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-444%3A+Augment+metrics+for+Kafka+Streams). 



2. In section "Compatibility, Deprecation, and Migration Plan" you
should state if anything is needed to keep backward compatibility.
Since you just want to add two methods to the interface, nothing is
needed. You should describe that under that section.

Regarding the KIP content, I left some comments on the corresponding
Jira ticket.

Best,
Bruno


On Sun, Jul 5, 2020 at 3:48 AM Mohamed Chebbi  
wrote:


https://cwiki.apache.org/confluence/display/KAFKA/KIP-639%3A+Move+nodeLevelSensor+and+storeLevelSensor+methods+from+StreamsMetricsImpl+to+StreamsMetrics 





Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #10

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10386; Fix flexible version support for `records` type (#9163)


--
[...truncated 1.76 MB...]
kafka.admin.ReassignPartitionsIntegrationTest > testReplicaDirectoryMoves 
STARTED

kafka.admin.ReassignPartitionsIntegrationTest > testReplicaDirectoryMoves PASSED

kafka.admin.ReassignPartitionsUnitTest > 
testCurrentPartitionReplicaAssignmentToString STARTED

kafka.admin.ReassignPartitionsUnitTest > 
testCurrentPartitionReplicaAssignmentToString PASSED

kafka.admin.ReassignPartitionsUnitTest > testParseExecuteAssignmentArgs STARTED

kafka.admin.ReassignPartitionsUnitTest > testParseExecuteAssignmentArgs PASSED

kafka.admin.ReassignPartitionsUnitTest > testExecuteWithInvalidBrokerIdFails 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testExecuteWithInvalidBrokerIdFails 
PASSED

kafka.admin.ReassignPartitionsUnitTest > testModifyBrokerThrottles STARTED

kafka.admin.ReassignPartitionsUnitTest > testModifyBrokerThrottles PASSED

kafka.admin.ReassignPartitionsUnitTest > testGetReplicaAssignments STARTED

kafka.admin.ReassignPartitionsUnitTest > testGetReplicaAssignments PASSED

kafka.admin.ReassignPartitionsUnitTest > testCompareTopicPartitionReplicas 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testCompareTopicPartitionReplicas 
PASSED

kafka.admin.ReassignPartitionsUnitTest > testReplicaMoveStatesToString STARTED

kafka.admin.ReassignPartitionsUnitTest > testReplicaMoveStatesToString PASSED

kafka.admin.ReassignPartitionsUnitTest > testExecuteWithInvalidPartitionsFails 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testExecuteWithInvalidPartitionsFails 
PASSED

kafka.admin.ReassignPartitionsUnitTest > testGenerateAssignmentWithFewerBrokers 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testGenerateAssignmentWithFewerBrokers 
PASSED

kafka.admin.ReassignPartitionsUnitTest > testParseGenerateAssignmentArgs STARTED

kafka.admin.ReassignPartitionsUnitTest > testParseGenerateAssignmentArgs PASSED

kafka.admin.ReassignPartitionsUnitTest > testFindLogDirMoveStates STARTED

kafka.admin.ReassignPartitionsUnitTest > testFindLogDirMoveStates PASSED

kafka.admin.ReassignPartitionsUnitTest > testAlterReplicaLogDirs STARTED

kafka.admin.ReassignPartitionsUnitTest > testAlterReplicaLogDirs PASSED

kafka.admin.ReassignPartitionsUnitTest > testMoveMap STARTED

kafka.admin.ReassignPartitionsUnitTest > testMoveMap PASSED

kafka.admin.ReassignPartitionsUnitTest > testPartitionReassignStatesToString 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testPartitionReassignStatesToString 
PASSED

kafka.admin.ReassignPartitionsUnitTest > testGetBrokerRackInformation STARTED

kafka.admin.ReassignPartitionsUnitTest > testGetBrokerRackInformation PASSED

kafka.admin.ReassignPartitionsUnitTest > testModifyTopicThrottles STARTED

kafka.admin.ReassignPartitionsUnitTest > testModifyTopicThrottles PASSED

kafka.admin.ReassignPartitionsUnitTest > testCurReassignmentsToString STARTED

kafka.admin.ReassignPartitionsUnitTest > testCurReassignmentsToString PASSED

kafka.admin.ReassignPartitionsUnitTest > 
testGenerateAssignmentWithInconsistentRacks STARTED

kafka.admin.ReassignPartitionsUnitTest > 
testGenerateAssignmentWithInconsistentRacks PASSED

kafka.admin.ReassignPartitionsUnitTest > testCompareTopicPartitions STARTED

kafka.admin.ReassignPartitionsUnitTest > testCompareTopicPartitions PASSED

kafka.admin.ReassignPartitionsUnitTest > testFindPartitionReassignmentStates 
STARTED

kafka.admin.ReassignPartitionsUnitTest > testFindPartitionReassignmentStates 
PASSED

kafka.admin.ReassignPartitionsUnitTest > 
testGenerateAssignmentFailsWithoutEnoughReplicas STARTED

kafka.admin.ReassignPartitionsUnitTest > 
testGenerateAssignmentFailsWithoutEnoughReplicas PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementAllServers PASSED

kafka.admin.AddPartitionsTest > testMissingPartition0 STARTED

kafka.admin.AddPartitionsTest > testMissingPartition0 PASSED

kafka.admin.AddPartitionsTest > testWrongReplicaCount STARTED

kafka.admin.AddPartitionsTest > testWrongReplicaCount PASSED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers STARTED

kafka.admin.AddPartitionsTest > testReplicaPlacementPartialServers PASSED

kafka.admin.AddPartitionsTest > testIncrementPartitions STARTED

kafka.admin.AddPartitionsTest > testIncrementPartitions PASSED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas STARTED

kafka.admin.AddPartitionsTest > testManualAssignmentOfReplicas PASSED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady STARTED

kafka.common.InterBrokerSendThreadTest > 
shouldCreateClientRequestAndSendWhenNodeIsReady PASSED

kafka.common.InterBrokerSendThreadTest > testFailingExpiredRequests STARTED


Re: [DISCUSS] KIP-405: Kafka Tiered Storage

2020-08-13 Thread Kowshik Prakasam
Hi Harsha/Satish,

Thanks for the great KIP. Below is the first set of questions/suggestions
I had after making a pass on the KIP.

5001. Under the section "Follower fetch protocol in detail", the
next-local-offset is the offset up to which the segments are copied to
remote storage. Instead, would last-tiered-offset be a better name than
next-local-offset? last-tiered-offset seems to naturally align well with
the definition provided in the KIP.

5002. After leadership is established for a partition, the leader would
begin uploading a segment to remote storage. If successful, the leader
would write the updated RemoteLogSegmentMetadata to the metadata topic (via
RLMM.putRemoteLogSegmentData). However, for defensive reasons, it seems
useful that before the first time the segment is uploaded by the leader for
a partition, the leader should ensure to catch up to all the metadata
events written so far in the metadata topic for that partition (ex: by
previous leader). To achieve this, the leader could start a lease (using an
establish_leader metadata event) before commencing tiering, and wait until
the event is read back. For example, this seems useful to avoid cases where
zombie leaders can be active for the same partition. This can also prove
useful to help avoid making decisions on which segments to be uploaded for
a partition, until the current leader has caught up to a complete view of
all segments uploaded for the partition so far (otherwise this may cause
same segment being uploaded twice -- once by the previous leader and then
by the new leader).

5003. There is a natural interleaving between uploading a segment to remote
store, and, writing a metadata event for the same (via
RLMM.putRemoteLogSegmentData). There can be cases where a remote segment is
uploaded, then the leader fails and a corresponding metadata event never
gets written. In such cases, the orphaned remote segment has to be
eventually deleted (since there is no confirmation of the upload). To
handle this, we could use 2 separate metadata events viz. copy_initiated
and copy_completed, so that copy_initiated events that don't have a
corresponding copy_completed event can be treated as garbage and deleted
from the remote object store by the broker.

5004. In the default implementation of RLMM (using the internal topic
__remote_log_metadata), a separate topic called
__remote_segments_to_be_deleted is going to be used just to track failures
in removing remote log segments. A separate topic (effectively another
metadata stream) introduces some maintenance overhead and design
complexity. It seems to me that the same can be achieved just by using just
the __remote_log_metadata topic with the following steps: 1) the leader
writes a delete_initiated metadata event, 2) the leader deletes the segment
and 3) the leader writes a delete_completed metadata event. Tiered segments
that have delete_initiated message and not delete_completed message, can be
considered to be a failure and retried.
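
For illustration only, the two-phase pattern suggested in 5003/5004 could be
sketched roughly as below (all names are hypothetical, not from the KIP):

// Hypothetical event types for the two-phase copy/delete protocol sketched above.
enum RemoteSegmentEventType {
    COPY_INITIATED,    // written to the metadata topic before uploading a segment
    COPY_COMPLETED,    // written once the upload is confirmed
    DELETE_INITIATED,  // written before deleting a remote segment
    DELETE_COMPLETED   // written once the deletion is confirmed
}

// A periodic cleanup pass could then treat any segment whose latest event is
// an *_INITIATED one (older than some grace period) as orphaned and retry the
// upload confirmation or the deletion, respectively.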

5005. When a Kafka cluster is provisioned for the first time with KIP-405
tiered storage enabled, could you explain in the KIP about how the
bootstrap for the __remote_log_metadata topic will be performed in the
default RLMM implementation?

5006. I currently do not see details on the KIP on why RocksDB was chosen
as the default cache implementation, and how it is going to be used. Were
alternatives compared/considered? For example, it would be useful to
explain/evaluate the following: 1) debuggability of the RocksDB JNI
interface, 2) performance, 3) portability across platforms and 4) interface
parity of RocksDB’s JNI API with its underlying C/C++ API.

5007. For the RocksDB cache (the default implementation of RLMM), what is
the relationship/mapping between the following: 1) # of tiered partitions,
2) # of partitions of metadata topic __remote_log_metadata and 3) # of
RocksDB instances? i.e. is the plan to have a RocksDB instance per tiered
partition, or per metadata topic partition, or just 1 for per broker?

5008. The system-wide configuration 'remote.log.storage.enable' is used to
enable tiered storage. Can this be made a topic-level configuration, so
that the user can enable/disable tiered storage at a topic level rather
than a system-wide default for an entire Kafka cluster?

5009. Whenever a topic with tiered storage enabled is deleted, the
underlying actions require the topic data to be deleted in local store as
well as remote store, and eventually the topic metadata needs to be deleted
too. What is the role of the controller in deleting a topic and its
contents, while the topic has tiered storage enabled?

5010. RLMM APIs are currently synchronous, for example
RLMM.putRemoteLogSegmentData waits until the put operation is completed in
the remote metadata store. It may also block until the leader has caught up
to the metadata (not sure). Could we make these APIs asynchronous (ex:
based on java.util.concurrent.Future) to provide room for tapping
performance 

[VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-13 Thread Boyang Chen
Hello everyone,

I would like to start a vote thread for KIP-657:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-657%3A+Add+Customized+Kafka+Streams+Logo

This KIP aims to add a new logo for the Kafka Streams library, and we
prepared two candidates featuring a cute otter. You can look at the KIP to
find those logos.


Please post your vote on one of these two customized logos. For example, I
would vote for *design-A (binding)*.

This vote thread shall be open for one week to collect enough votes to call
for a winner. Still, feel free to post any question you may have regarding
this KIP, thanks!


Re: [DISCUSS] Apache Kafka 2.7.0 release

2020-08-13 Thread Ismael Juma
Thanks for volunteering Bill. :)

Ismael

On Thu, Aug 13, 2020 at 3:13 PM Bill Bejeck  wrote:

> Hi All,
>
> I'd like to volunteer to be the release manager for our next feature
> release, 2.7. If there are no objections, I'll send out the release plan
> soon.
>
> Thanks,
> Bill Bejeck
>


[DISCUSS] Apache Kafka 2.7.0 release

2020-08-13 Thread Bill Bejeck
Hi All,

I'd like to volunteer to be the release manager for our next feature
release, 2.7. If there are no objections, I'll send out the release plan
soon.

Thanks,
Bill Bejeck


[jira] [Created] (KAFKA-10400) Add a customized Kafka Streams logo

2020-08-13 Thread Boyang Chen (Jira)
Boyang Chen created KAFKA-10400:
---

 Summary: Add a customized Kafka Streams logo
 Key: KAFKA-10400
 URL: https://issues.apache.org/jira/browse/KAFKA-10400
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Boyang Chen






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


RE: [EXTERNAL] Re: Request for access to create KIP

2020-08-13 Thread Koushik Chitta
Accountid: koushikchitta

Cheers,
Koushik

-Original Message-
From: Boyang Chen  
Sent: Sunday, August 9, 2020 6:02 PM
To: dev 
Subject: [EXTERNAL] Re: Request for access to create KIP

Have you created the account already? What's your account id?

On Sat, Aug 8, 2020 at 4:36 PM Koushik Chitta 
wrote:

> Hi Team,
>
> Can you please grant me access to create KIP ?
>
> Thanks,
> Koushik
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #11

2020-08-13 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-10399) Producer and consumer clients could log IP addresses for brokers to ease debugging

2020-08-13 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-10399:


 Summary: Producer and consumer clients could log IP addresses for 
brokers to ease debugging
 Key: KAFKA-10399
 URL: https://issues.apache.org/jira/browse/KAFKA-10399
 Project: Kafka
  Issue Type: Improvement
  Components: consumer, producer 
Reporter: Lucas Bradstreet


Lag in DNS updates and resolution can cause connectivity problems in clients. 
client.dns.lookup = "use_all_dns_ips"
helps reduce the incidence of such issues, however it's still possible for DNS 
issues to cause real problems with clients.

The ZK client helpfully logs IP addresses with DNS addresses. We could do the 
same thing in the Kafka clients, e.g.
{noformat}
Group coordinator broker3.my.kafka.cluster.com/52.32.14.201:9092 (id: 3738382 
rack: null) is unavailable or invalid{noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #10

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10386; Fix flexible version support for `records` type (#9163)


--
[...truncated 3.20 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 

Re: Authentication Mechanisms for Kafka Connect Rest APIs

2020-08-13 Thread mandeep gandhi
More context - I am a noob here and want some help so that I can contribute
the auth bits.

Should I create a JIRA or/and a KIP?
If this is big enough for a KIP, please give me the permissions to create
one (username - ifconfig)

On Wed, Aug 12, 2020 at 1:24 PM mandeep gandhi 
wrote:

> Hi all,
>
> Currently, we support Basic Auth for Kafka Connect Rest APIs[0]. That is
> supported through the Connect Rest Extension[1]. I was wondering if we
> had plans to extend the authentication mechanisms like SASL as in other
> places in the Kafka Project.
>
>
>
> [0] -
> https://docs.confluent.io/current/security/basic-auth.html#basic-auth-kconnect
> [1] -
> https://github.com/apache/kafka/blob/trunk/connect/basic-auth-extension/src/main/java/org/apache/kafka/connect/rest/basic/auth/extension/BasicAuthSecurityRestExtension.java
>
>
> Thanks,
>
> Mandeep Gandhi
>
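
For reference, a custom mechanism today would plug in via the same Rest
Extension SPI as [1]; a minimal sketch of a header-checking JAX-RS filter
(class names and the token check are illustrative only, not a real SASL
integration) might look like:

import java.io.IOException;
import java.util.Map;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Response;

import org.apache.kafka.connect.rest.ConnectRestExtension;
import org.apache.kafka.connect.rest.ConnectRestExtensionContext;

public class MyAuthRestExtension implements ConnectRestExtension {

    // JAX-RS filter that rejects requests without the expected credentials.
    public static class MyAuthFilter implements ContainerRequestFilter {
        @Override
        public void filter(ContainerRequestContext requestContext) throws IOException {
            String header = requestContext.getHeaderString("Authorization");
            if (header == null || !header.equals("Bearer my-secret-token")) {
                requestContext.abortWith(Response.status(Response.Status.UNAUTHORIZED).build());
            }
        }
    }

    @Override
    public void register(ConnectRestExtensionContext restPluginContext) {
        // Register the filter with Connect's embedded REST server.
        restPluginContext.configurable().register(new MyAuthFilter());
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() throws IOException {}

    @Override
    public String version() {
        return "1.0";
    }
}

Such an extension would be enabled via the worker's rest.extension.classes
config; the open question is whether Kafka should ship standard mechanisms
like SASL out of the box.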


[jira] [Resolved] (KAFKA-10386) Fix record serialization with flexible versions

2020-08-13 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-10386.
-
Resolution: Fixed

> Fix record serialization with flexible versions
> ---
>
> Key: KAFKA-10386
> URL: https://issues.apache.org/jira/browse/KAFKA-10386
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> The generated serde code for the "records" type uses a mix of compact and 
> non-compact length representations which leads to serialization errors. We 
> should update the generator logic to use the compact representation 
> consistently.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] ableegoldman commented on pull request #294: MINOR: removing docs version header from 2.6 docs

2020-08-13 Thread GitBox


ableegoldman commented on pull request #294:
URL: https://github.com/apache/kafka-site/pull/294#issuecomment-673585020


   Just tried in a different browser and I still only see the header on 2.5. 
The 2.4 formatting looks a little different, but no "You're viewing an older 
version..." header



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] ableegoldman commented on pull request #294: MINOR: removing docs version header from 2.6 docs

2020-08-13 Thread GitBox


ableegoldman commented on pull request #294:
URL: https://github.com/apache/kafka-site/pull/294#issuecomment-673584334


   Huh, I don't see the header on 2.4 (or earlier). Since this has been merged 
I now only see it on 2.5



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10398) Intra-broker disk move failed with onPartitionFenced()

2020-08-13 Thread Ming Liu (Jira)
Ming Liu created KAFKA-10398:


 Summary: Intra-broker disk move failed with onPartitionFenced()
 Key: KAFKA-10398
 URL: https://issues.apache.org/jira/browse/KAFKA-10398
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: Ming Liu


When I tried the intra-broker disk move on 2.5.0, it always failed quickly with an 
onPartitionFenced() failure. This is all the log for ReplicaAlterLogDirsManager:
[2020-06-03 04:52:17,541] INFO [ReplicaAlterLogDirsManager on broker 5] Added 
fetcher to broker BrokerEndPoint(id=5, host=localhost:-1) for partitions 
Map(author_id_enrichment_changelog_staging-302 -> (offset=0, leaderEpoch=45)) 
(kafka.server.ReplicaAlterLogDirsManager)
[2020-06-03 04:52:17,546] INFO [ReplicaAlterLogDirsThread-5]: Starting 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,546] INFO [ReplicaAlterLogDirsThread-5]: Truncating 
partition author_id_enrichment_changelog_staging-302 to local high watermark 0 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,547] INFO [ReplicaAlterLogDirsThread-5]: 
Beginning/resuming copy of partition author_id_enrichment_changelog_staging-302 
from offset 0. Including this partition, there are 1 remaining partitions to 
copy by this thread. (kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,547] WARN [ReplicaAlterLogDirsThread-5]: Reset fetch 
offset for partition author_id_enrichment_changelog_staging-302 from 0 to 
current leader's start offset 1656927679 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,550] INFO [ReplicaAlterLogDirsThread-5]: Current offset 0 
for partition author_id_enrichment_changelog_staging-302 is out of range, which 
typically implies a leader change. Reset fetch offset to 1656927679 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,653] INFO [ReplicaAlterLogDirsThread-5]: Partition 
author_id_enrichment_changelog_staging-302 has an older epoch (45) than the 
current leader. Will await the new LeaderAndIsr state before resuming fetching. 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,653] WARN [ReplicaAlterLogDirsThread-5]: Partition 
author_id_enrichment_changelog_staging-302 marked as failed 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,657] INFO [ReplicaAlterLogDirsThread-5]: Shutting down 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,661] INFO [ReplicaAlterLogDirsThread-5]: Stopped 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 04:52:17,661] INFO [ReplicaAlterLogDirsThread-5]: Shutdown 
completed (kafka.server.ReplicaAlterLogDirsThread)
Only after restarting the broker did the disk move succeed. The offset and epoch 
numbers look better.
[2020-06-03 05:20:12,597] INFO [ReplicaAlterLogDirsManager on broker 5] Added 
fetcher to broker BrokerEndPoint(id=5, host=localhost:-1) for partitions 
Map(author_id_enrichment_changelog_staging-302 -> (offset=166346, 
leaderEpoch=47)) (kafka.server.ReplicaAlterLogDirsManager)
[2020-06-03 05:20:12,606] INFO [ReplicaAlterLogDirsThread-5]: Starting 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 05:20:12,618] INFO [ReplicaAlterLogDirsThread-5]: 
Beginning/resuming copy of partition author_id_enrichment_changelog_staging-302 
from offset 1657605964. Including this partition, there are 1 remaining 
partitions to copy by this thread. (kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 05:20:20,992] INFO [ReplicaAlterLogDirsThread-5]: Shutting down 
(kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 05:20:20,994] INFO [ReplicaAlterLogDirsThread-5]: Shutdown 
completed (kafka.server.ReplicaAlterLogDirsThread)
[2020-06-03 05:20:20,994] INFO [ReplicaAlterLogDirsThread-5]: Stopped 
(kafka.server.ReplicaAlterLogDirsThread)
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-10134) High CPU issue during rebalance in Kafka consumer after upgrading to 2.5

2020-08-13 Thread Ismael Juma (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reopened KAFKA-10134:
-

> High CPU issue during rebalance in Kafka consumer after upgrading to 2.5
> 
>
> Key: KAFKA-10134
> URL: https://issues.apache.org/jira/browse/KAFKA-10134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.5.0
>Reporter: Sean Guo
>Assignee: Guozhang Wang
>Priority: Blocker
> Fix For: 2.6.0, 2.5.1
>
> Attachments: consumer5.log.2020-07-22.log
>
>
> We want to utilize the new rebalance protocol to mitigate the stop-the-world 
> effect during the rebalance, as our tasks are long-running tasks.
> But after the upgrade, when we try to kill an instance to let a rebalance happen 
> while there is some load (some are long-running tasks, >30s), the CPU will 
> go sky-high. It reads ~700% in our metrics, so there should be several threads 
> in a tight loop. We have several consumer threads consuming from 
> different partitions during the rebalance. This is reproducible with both the 
> new CooperativeStickyAssignor and the old eager rebalance protocol. The 
> difference is that with the old eager rebalance protocol, the high 
> CPU usage dropped after the rebalance was done. But when using the cooperative 
> one, it seems the consumer threads are stuck on something and couldn't 
> finish the rebalance, so the high CPU usage won't drop until we stop our 
> load. Also, a small load without long-running tasks won't cause continuous 
> high CPU usage, as the rebalance can finish in that case.
>  
> "executor.kafka-consumer-executor-4" #124 daemon prio=5 os_prio=0 
> cpu=76853.07ms elapsed=841.16s tid=0x7fe11f044000 nid=0x1f4 runnable  
> [0x7fe119aab000]"executor.kafka-consumer-executor-4" #124 daemon prio=5 
> os_prio=0 cpu=76853.07ms elapsed=841.16s tid=0x7fe11f044000 nid=0x1f4 
> runnable  [0x7fe119aab000]   java.lang.Thread.State: RUNNABLE at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:467)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1275)
>  at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1241) 
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1216) 
> at
>  
> By debugging into the code, we found it looks like the clients are in a loop 
> finding the coordinator.
> I also tried the old rebalance protocol for the new version the issue still 
> exists but the CPU will be back to normal when the rebalance is done.
> Also tried the same on 2.4.1, which doesn't seem to have this issue. So it 
> seems related to something that changed between 2.4.1 and 2.5.0.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] rhauch merged pull request #294: MINOR: removing docs version header from 2.6 docs

2020-08-13 Thread GitBox


rhauch merged pull request #294:
URL: https://github.com/apache/kafka-site/pull/294


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] rhauch commented on pull request #294: MINOR: removing docs version header from 2.6 docs

2020-08-13 Thread GitBox


rhauch commented on pull request #294:
URL: https://github.com/apache/kafka-site/pull/294#issuecomment-673499292


   @ableegoldman I see the header on 2.4 and earlier. IIUC, this is to *add* 
this header to the 2.5 docs and *remove* this header from all of the 2.6 docs 
(e.g., https://kafka.apache.org/26/documentation/streams/developer-guide/).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (KAFKA-10397) Do not Expose Statistics-based RocksDB Metrics If User Provides Statistics Object

2020-08-13 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-10397:
-

 Summary: Do not Expose Statistics-based RocksDB Metrics If User 
Provides Statistics Object
 Key: KAFKA-10397
 URL: https://issues.apache.org/jira/browse/KAFKA-10397
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 2.6.0
Reporter: Bruno Cadonna


Currently, when users provide a {{Statistics}} object to a RocksDB state store 
through the {{RocksDBConfigSetter}}, the statistics-based RocksDB metrics are 
not recorded. However, they are still exposed.
 
It would be cleaner and more user-friendly if the statistics-based RocksDB 
metrics were not exposed when users provide a {{Statistics}} object.
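
For context, a minimal sketch of the scenario in question (class name
illustrative): a user-supplied Statistics object wired in through the config
setter, which Streams then does not use for recording but currently still
exposes as metrics.

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;
import org.rocksdb.Statistics;

public class UserStatisticsConfigSetter implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options, final Map<String, Object> configs) {
        // The user supplies their own Statistics object for custom monitoring.
        options.setStatistics(new Statistics());
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Close the user-supplied Statistics to free its native resources.
        options.statistics().close();
    }
}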



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-635: GetOffsetShell: support for multiple topics and consumer configuration override

2020-08-13 Thread Dániel Urbán
Hi David,

Thank you for the suggestion. KIP-635 was referencing the --broker-list
issue, but based on your suggestion, I pinged the PR
https://github.com/apache/kafka/pull/8123.
Since I got no response, I updated KIP-635 to deprecate --broker-list. Will
update the PR related to KIP-635 to reflect that change.

Thanks,
Daniel

David Jacot  wrote (on Mon, 10 Aug 2020, 20:48):

> Hi Daniel,
>
> I was not aware of that PR. At minimum, I would add `--bootstrap-server`
> to the list in the KIP for completeness. Regarding the implementation,
> I would leave a comment in that PR asking if they plan to continue it. If
> not,
> we could do it as part of your PR directly.
>
> Cheers,
> David
>
> On Mon, Aug 10, 2020 at 10:49 AM Dániel Urbán 
> wrote:
>
> > Hi everyone,
> >
> > Just a reminder, please vote if you are interested in this KIP being
> > implemented.
> >
> > Thanks,
> > Daniel
> >
> > Dániel Urbán  wrote (on Fri, 31 Jul 2020, 9:01):
> >
> > > Hi David,
> > >
> > > There is another PR linked on KAFKA-8507, which is still open:
> > > https://github.com/apache/kafka/pull/8123
> > > Wasn't sure if it will go in, and wanted to avoid conflicts. Do you
> think
> > > I should do the switch to '--bootstrap-server' anyway?
> > >
> > > Thanks,
> > > Daniel
> > >
> > > David Jacot  wrote (on Thu, 30 Jul 2020, 17:52):
> > >
> > >> Hi Daniel,
> > >>
> > >> Thanks for the KIP.
> > >>
> > >> It seems that we have forgotten to include this tool in KIP-499.
> > >> KAFKA-8507 is resolved but this tool still uses the deprecated
> > >> "--broker-list". I suggest to
> > >> include "--bootstrap-server"
> > >> in your public interfaces as well and fix this omission during the
> > >> implementation.
> > >>
> > >> +1 (non-binding)
> > >>
> > >> Thanks,
> > >> David
> > >>
> > >> On Thu, Jul 30, 2020 at 1:52 PM Kamal Chandraprakash <
> > >> kamal.chandraprak...@gmail.com> wrote:
> > >>
> > >> > +1 (non-binding), thanks for the KIP!
> > >> >
> > >> > On Thu, Jul 30, 2020 at 3:31 PM Manikumar <
> manikumar.re...@gmail.com>
> > >> > wrote:
> > >> >
> > >> > > +1 (binding)
> > >> > >
> > >> > > Thanks for the KIP!
> > >> > >
> > >> > >
> > >> > >
> > >> > > On Thu, Jul 30, 2020 at 3:07 PM Dániel Urbán <
> urb.dani...@gmail.com
> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Hi everyone,
> > >> > > >
> > >> > > > If you are interested in this KIP, please do not forget to vote.
> > >> > > >
> > >> > > > Thanks,
> > >> > > > Daniel
> > >> > > >
> > >> > > > Viktor Somogyi-Vass  wrote (on Tue, 28 Jul 2020, 16:06):
> > >> > > >
> > >> > > > > +1 from me (non-binding), thanks for the KIP.
> > >> > > > >
> > >> > > > > On Mon, Jul 27, 2020 at 10:02 AM Dániel Urbán <
> > >> urb.dani...@gmail.com
> > >> > >
> > >> > > > > wrote:
> > >> > > > >
> > >> > > > > > Hello everyone,
> > >> > > > > >
> > >> > > > > > I'd like to start a vote on KIP-635. The KIP enhances the
> > >> > > > GetOffsetShell
> > >> > > > > > tool by enabling querying multiple topic-partitions, adding
> > new
> > >> > > > filtering
> > >> > > > > > options, and adding a config override option.
> > >> > > > > >
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-635%3A+GetOffsetShell%3A+support+for+multiple+topics+and+consumer+configuration+override
> > >> > > > > >
> > >> > > > > > The original discussion thread was named "[DISCUSS] KIP-308:
> > >> > > > > > GetOffsetShell: new KafkaConsumer API, support for multiple
> > >> topics,
> > >> > > > > > minimize the number of requests to server". The id had to be
> > >> > changed
> > >> > > as
> > >> > > > > > there was a collision, and the KIP also had to be renamed,
> as
> > >> some
> > >> > of
> > >> > > > its
> > >> > > > > > motivations were outdated.
> > >> > > > > >
> > >> > > > > > Thanks,
> > >> > > > > > Daniel
> > >> > > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> >
>


Build failed in Jenkins: Kafka » kafka-2.4-jdk8 #2

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] MINOR: Ensure same version of scala library is used for compile 
and at runtime (#9168)


--
[...truncated 8.36 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = true] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldCloseProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFeedStoreFromGlobalKTable[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCleanUpPersistentStateStoresOnClose[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowPatternNotValidForTopicNameException[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldEnqueueLaterOutputsAfterEarlierOnes[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializersDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfEvenTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldInitProcessor[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowForUnknownTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnStreamsTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfInMemoryBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourcesThatMatchMultiplePattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldPopulateGlobalStore[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldThrowIfPersistentBuiltInStoreIsAccessedWithUntypedMethod[Eos enabled = 
false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldAllowPrePopulatingStatesStoresWithCachingEnabled[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #9

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10391: Overwrite checkpoint in task corruption to remove 
corrupted partitions (#9170)


--
[...truncated 6.43 MB...]
org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig.unbounded 
should produce the correct buffer config PASSED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods STARTED

org.apache.kafka.streams.scala.kstream.SuppressedTest > BufferConfig should 
support very long chains of factory methods PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped should 
create a Grouped with Serdes PASSED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name STARTED

org.apache.kafka.streams.scala.kstream.GroupedTest > Create a Grouped with 
repartition topic name should create a Grouped with Serdes, and repartition 
topic name PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionWithNamedRepartitionTopic PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegionJava PASSED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion STARTED

org.apache.kafka.streams.scala.StreamToTableJoinScalaIntegrationTestImplicitSerdes
 > testShouldCountClicksPerRegion PASSED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned should create a Repartitioned with Serdes STARTED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned should create a Repartitioned with Serdes PASSED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with numPartitions should create a Repartitioned with Serdes and 
numPartitions STARTED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with numPartitions should create a Repartitioned with Serdes and 
numPartitions PASSED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with topicName should create a Repartitioned with Serdes and 
topicName STARTED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with topicName should create a Repartitioned with Serdes and 
topicName PASSED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with streamPartitioner should create a Repartitioned with Serdes, 
numPartitions, topicName and streamPartitioner STARTED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with streamPartitioner should create a Repartitioned with Serdes, 
numPartitions, topicName and streamPartitioner PASSED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with numPartitions, topicName, and streamPartitioner should 
create a Repartitioned with Serdes, numPartitions, topicName and 
streamPartitioner STARTED

org.apache.kafka.streams.scala.kstream.RepartitionedTest > Create a 
Repartitioned with numPartitions, topicName, and streamPartitioner should 
create a Repartitioned with Serdes, numPartitions, topicName and 
streamPartitioner PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filter a KStream should 
filter records satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > filterNot a KStream should 
filter records not satisfying the predicate PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > foreach a KStream should 
run foreach actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records STARTED

org.apache.kafka.streams.scala.kstream.KStreamTest > peek a KStream should run 
peek actions on records PASSED

org.apache.kafka.streams.scala.kstream.KStreamTest > selectKey a KStream should 
select a new key STARTED


[jira] [Created] (KAFKA-10396) Overall memory of container keeps growing due to Kafka Streams / RocksDB and is OOM-killed once the limit is reached

2020-08-13 Thread Vagesh Mathapati (Jira)
Vagesh Mathapati created KAFKA-10396:


 Summary: Overall memory of container keeps growing due to Kafka 
Streams / RocksDB and is OOM-killed once the limit is reached
 Key: KAFKA-10396
 URL: https://issues.apache.org/jira/browse/KAFKA-10396
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect, streams
Affects Versions: 2.5.0, 2.3.1
Reporter: Vagesh Mathapati


We are observing that the overall memory of our container keeps growing and 
never comes down.
After analysis, we found that rocksdbjni.so keeps allocating 64 MB chunks of 
off-heap memory and never releases them. This causes an OOM kill once memory 
reaches the configured limit.

We use Kafka Streams and GlobalKTable for many of our Kafka topics.

Below is our environment:
 * Kubernetes cluster
 * openjdk 11.0.7 2020-04-14 LTS
 * OpenJDK Runtime Environment Zulu11.39+16-SA (build 11.0.7+10-LTS)
 * OpenJDK 64-Bit Server VM Zulu11.39+16-SA (build 11.0.7+10-LTS, mixed mode)
 * Springboot 2.3
 * spring-kafka-2.5.0
 * kafka-streams-2.5.0
 * kafka-streams-avro-serde-5.4.0
 * rocksdbjni-5.18.3

We observed the same result with Kafka 2.3.

Below is a snippet of our analysis. From the pmap output we took the 
addresses of these 64 MB allocations (RSS):

Address Kbytes RSS Dirty Mode Mapping
7f3ce800 65536 65532 65532 rw--- [ anon ]
7f3cf400 65536 65536 65536 rw--- [ anon ]
7f3d6400 65536 65536 65536 rw--- [ anon ]

We tried to match these against memory-allocation logs, enabled with the 
help of the Azul Systems team.

@ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741]
 - 0x7f3ce8ff7ca0
 @ /tmp/librocksdbjni6564497922441568920.so:
_ZN7rocksdb15BlockBasedTable3GetERKNS_11ReadOptionsERKNS_5SliceEPNS_10GetContextEPKNS_14SliceTransformEb+0x894)[0x7f3e1c898fd4]
 - 0x7f3ce8ff9780
 @ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da]
 - 0x7f3ce8ff9750
 @ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741]
 - 0x7f3ce8ff97c0
 @ 
/tmp/librocksdbjni6564497922441568920.so:_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da]
 - 0x7f3ce8ffccf0
 @ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741]
 - 0x7f3ce8ffcd10
 @ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0xfa)[0x7f3e1c65d5da]
 - 0x7f3ce8ffccf0
 @ /tmp/librocksdbjni6564497922441568920.so:
_Z18rocksdb_get_helperP7JNIEnv_PN7rocksdb2DBERKNS1_11ReadOptionsEPNS1_18ColumnFamilyHandleEP11_jbyteArrayii+0x261)[0x7f3e1c65d741]
 - 0x7f3ce8ffcd10


We also identified that the content of these 64 MB regions is all zeros; no 
actual data is present in them.

I tried to tune the RocksDB configuration as described at 
[https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config] 
but it did not help.
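
For reference, here is a minimal sketch of the bounded-memory approach from 
that guide, assuming the kafka-streams 2.5.x RocksDBConfigSetter API and the 
bundled rocksdbjni 5.18; the class name and the 64 MB / 16 MB limits are 
illustrative placeholders, not a verified fix for this issue:

import java.util.Map;

import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Cache;
import org.rocksdb.LRUCache;
import org.rocksdb.Options;
import org.rocksdb.WriteBufferManager;

public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {

  // Shared across all store instances so the total off-heap footprint
  // stays bounded (sizes here are illustrative placeholders).
  private static final Cache CACHE = new LRUCache(64 * 1024 * 1024L);
  private static final WriteBufferManager WRITE_BUFFER_MANAGER =
      new WriteBufferManager(16 * 1024 * 1024L, CACHE);

  @Override
  public void setConfig(final String storeName,
                        final Options options,
                        final Map<String, Object> configs) {
    final BlockBasedTableConfig tableConfig =
        (BlockBasedTableConfig) options.tableFormatConfig();
    tableConfig.setBlockCache(CACHE);
    // Count index and filter blocks against the same cache so they are
    // bounded as well.
    tableConfig.setCacheIndexAndFilterBlocks(true);
    options.setWriteBufferManager(WRITE_BUFFER_MANAGER);
    options.setTableFormatConfig(tableConfig);
  }

  @Override
  public void close(final String storeName, final Options options) {
    // Do not close the shared Cache/WriteBufferManager here; other store
    // instances are still using them.
  }
}

The setter is registered via 
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, 
BoundedMemoryRocksDBConfig.class); sharing one static Cache and 
WriteBufferManager across all store instances is what keeps the total 
off-heap footprint bounded.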

 

Please let me know if you need any more details.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #10

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10391: Overwrite checkpoint in task corruption to remove 
corrupted partitions (#9170)


--
[...truncated 3.22 MB...]
org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfKeyIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfValueIsEqualWithNullForCompareValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfTimestampIsDifferentForCompareValueTimestamp PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualWithNullForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullExpectedRecordForCompareValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldNotAllowNullProducerRecordForCompareKeyValue PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord 
PASSED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp PASSED


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #9

2020-08-13 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-10391: Overwrite checkpoint in task corruption to remove 
corrupted partitions (#9170)


--
[...truncated 3.19 MB...]
org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectPersistentStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldRespectTaskIdling[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSourceSpecificDeserializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldReturnAllStores[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotCreateStateDirectoryForStatelessTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnAllStoresNames[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers[Eos enabled = false] 
PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessConsumerRecordList[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUseSinkSpecificSerializers[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldFlushStoreForFirstInput[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessFromSourceThatMatchPattern[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureSinkTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldUpdateStoreForNewKey[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldSendRecordViaCorrectSourceTopicDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTime[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldSetRecordMetadata[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForLargerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest