Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #406

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12229: Restore original class loader in integration tests using 
EmbeddedConnectCluster during shutdown  (#9942)


--
[...truncated 3.55 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true 

Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #436

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12229: Restore original class loader in integration tests using 
EmbeddedConnectCluster during shutdown  (#9942)


--
[...truncated 3.58 MB...]
OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
STARTED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
PASSED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() STARTED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldForwardClose() STARTED

KeyValueStoreFacadeTest > shouldForwardClose() PASSED

KeyValueStoreFacadeTest > shouldForwardFlush() STARTED

KeyValueStoreFacadeTest > shouldForwardFlush() PASSED

KeyValueStoreFacadeTest > shouldForwardInit() 

[jira] [Created] (KAFKA-12238) Implement DescribeProducers API

2021-01-25 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12238:
---

 Summary: Implement DescribeProducers API
 Key: KAFKA-12238
 URL: https://issues.apache.org/jira/browse/KAFKA-12238
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Implement the changes described in KIP-664 for the `DescribeProducers` API. 
This is only the server-side implementation and not the changes to `Admin`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] scott-confluent commented on pull request #318: MINOR: updating infobip logo on powered-by page

2021-01-25 Thread GitBox


scott-confluent commented on pull request #318:
URL: https://github.com/apache/kafka-site/pull/318#issuecomment-767160417


   @ijuma @junrao @mjsax @vvcephei please take a look when you get a few. 
Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] miguno commented on a change in pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-25 Thread GitBox


miguno commented on a change in pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#discussion_r563562309



##
File path: 27/ops.html
##
@@ -553,7 +539,558 @@ 6.3 Kafka Configuration
+  6.3 Geo-Replication (Cross-Cluster Data Mirroring)
+
+  Geo-Replication Overview
+
+  
+Kafka administrators can define data flows that cross the boundaries of 
individual Kafka clusters, data centers, or geo-regions. Such event streaming 
setups are often needed for organizational, technical, or legal requirements. 
Common scenarios include:
+  
+
+  
+Geo-replication
+Disaster recovery
+Feeding edge clusters into a central, aggregate cluster
+Physical isolation of clusters (such as production vs. testing)
+Cloud migration or hybrid cloud deployments
+Legal and compliance requirements
+  
+
+  
+Administrators can set up such inter-cluster data flows with Kafka's 
MirrorMaker (version 2), a tool to replicate data between different Kafka 
environments in a streaming manner. MirrorMaker is built on top of the Kafka 
Connect framework and supports features such as:
+  
+
+  
+Replicates topics (data plus configurations)
+Replicates consumer groups including offsets to migrate applications 
between clusters
+Replicates ACLs
+Preserves partitioning
+Automatically detects new topics and partitions
+Provides a wide range of metrics, such as end-to-end replication 
latency across multiple data centers/clusters
+Fault-tolerant and horizontally scalable operations
+  
+
+  
+  Note: Geo-replication with MirrorMaker replicates data across Kafka 
clusters. This inter-cluster replication is different from Kafka's intra-cluster replication, which replicates data within 
the same Kafka cluster.
+  
+
+  What Are Replication Flows
+
+  
+With MirrorMaker, Kafka administrators can replicate topics, topic 
configurations, consumer groups and their offsets, and ACLs from one or more 
source Kafka clusters to one or more target Kafka clusters, i.e., across 
cluster environments. In a nutshell, MirrorMaker consumes data from the source 
cluster with source connectors, and then replicates the data by producing to 
the target cluster with sink connectors.
+  
+
+  
+These directional flows from source to target clusters are called 
replication flows. They are defined with the format 
{source_cluster}->{target_cluster} in the MirrorMaker 
configuration file as described later. Administrators can create complex 
replication topologies based on these flows.
+  
+
+  
+Here are some example patterns:
+  
+
+  
+Active/Active high availability deployments: A->B, 
B->A
+Active/Passive or Active/Standby high availability deployments: 
A->B
+Aggregation (e.g., from many clusters to one): A->K, B->K, 
C->K
+Fan-out (e.g., from one to many clusters): K->A, K->B, 
K->C
+Forwarding: A->B, B->C, C->D
+  
+
+  
+By default, a flow replicates all topics and consumer groups. However, 
each replication flow can be configured independently. For instance, you can 
define that only specific topics or consumer groups are replicated from the 
source cluster to the target cluster.
+  
+
+  
+    Here is a first example of how to configure data replication from a 
primary cluster to a secondary cluster (an 
active/passive setup):
+  
+
+# Basic settings
+clusters = primary, secondary
+primary.bootstrap.servers = broker3-primary:9092
+secondary.bootstrap.servers = broker5-secondary:9092
+
+# Define replication flows
+primary->secondary.enable = true
+primary->secondary.topics = foobar-topic, quux-.*
+
+
+
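+    An active/active variant of the above would enable a flow in each 
+    direction. A sketch under the same conventions (broker addresses as 
+    above):
+
+# Active/Active: enable replication in both directions
+primary->secondary.enable = true
+secondary->primary.enable = true
+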
+  Configuring Geo-Replication
+
+  
+The following sections describe how to configure and run a dedicated 
MirrorMaker cluster. If you want to run MirrorMaker within an existing Kafka 
Connect cluster or other supported deployment setups, please refer to 
KIP-382: MirrorMaker 2.0 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0) 
and be aware that the names of configuration settings may vary between 
deployment modes.
+  
+
+  
+Beyond what's covered in the following sections, further examples and 
information on configuration settings are available at:
+  
+
+  
+ MirrorMakerConfig 
(https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java), 
MirrorConnectorConfig 
(https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java)
+ DefaultTopicFilter for topics 
(https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/DefaultTopicFilter.java), 
DefaultGroupFilter for consumer groups 
(https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/DefaultGroupFilter.java)
+ Example configuration settings in 

[GitHub] [kafka-site] junrao commented on pull request #318: MINOR: updating infobip logo on powered-by page

2021-01-25 Thread GitBox


junrao commented on pull request #318:
URL: https://github.com/apache/kafka-site/pull/318#issuecomment-767175488


   @scott-confluent : Thanks for the PR. LGTM



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] junrao merged pull request #318: MINOR: updating infobip logo on powered-by page

2021-01-25 Thread GitBox


junrao merged pull request #318:
URL: https://github.com/apache/kafka-site/pull/318


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: How to build KAFKA 2.7 on z/OS platform

2021-01-25 Thread Luke Chen
Hi Rabast,
Gradle is just a build automation tool, and we still build the source code
using javac, so to your questions:
1. Can I use javac to compile the source code and pack it into jars
manually?
--> Yes

2. Which sequence is recommended?
--> You might need to refer to the Gradle build dependencies to know the
sequence, or try to build on another machine with Gradle installed; it'll
print out the build sequence.

3. Is a MAKE file available to rebuild the jar libs using make?
--> No, I don't think so.

I'd suggest that the easiest way is to build on another machine with
Gradle installed, and put the output jar files onto z/OS if you really
want to run in a z/OS environment.
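
For example, a sketch assuming a machine with a JDK and Gradle available
(the output paths below are the usual Gradle locations and may differ):

# on the build machine
./gradlew jar
# then copy the resulting jars (e.g. core/build/libs/, clients/build/libs/)
# over to the z/OS host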

Thanks.
Luke

On Sat, Jan 23, 2021 at 12:02 AM Rabast, Matthias
 wrote:

> How can I build Kafka from source (kafka-2.7.0-src.tar) on z/OS v2.3
> platform where gradle is NOT available?
>
> Can I use javac and compile the source code, and pack them with jar
> manually? Which sequence is recommended?
>
> Is a MAKE file available to rebuild the jar libs using make?
>
>
>
> Thanks for any advice.
>


Re: [VOTE] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2021-01-25 Thread Sophie Blee-Goldman
Thanks for the KIP Bruno, +1 (binding)

Sophie

On Mon, Jan 25, 2021 at 11:23 AM Guozhang Wang  wrote:

> Hey Bruno,
>
> Thanks for your response!
>
> 1) Yup I'm good with option a) as well.
> 2) Thanks!
> 3) Sounds good to me. I think it would not change any StreamThread
> implementation regarding capturing exceptions from consumer.poll() since it
> captures StreamsException as fatal.
>
>
> Guozhang
>
> On Wed, Dec 16, 2020 at 4:43 AM Bruno Cadonna  wrote:
>
> > Hi Guozhang,
> >
> > Thanks for the feedback!
> >
> > Please find my answers inline.
> >
> > Best,
> > Bruno
> >
> >
> > On 14.12.20 23:33, Guozhang Wang wrote:
> > > Hello Bruno,
> > >
> > > Just a few more questions about the KIP:
> > >
> > > 1) If the internal topics exist but the calculated num.partitions do
> not
> > > match the existing topics, what would Streams do;
> >
> > Good point! I missed explicitly considering misconfigurations in the KIP.
> >
> > I propose to throw a fatal error in this case during manual and
> > automatic initialization. For the fatal error, we have two options:
> > a) introduce a second exception besides MissingInternalTopicException,
> > e.g. MisconfiguredInternalTopicException
> > b) rename MissingInternalTopicException to
> > MissingOrMisconfiguredInternalTopicException and throw that in both
> cases.
> >
> > Since the process to react to such an exception on the user side should be
> > similar, I am fine with option b). However, IMO option a) is a bit
> > cleaner. WDYT?
> >
> > > 2) Since `init()` is a blocking call (we only return after all topics
> are
> > > confirmed to be created), should we have a timeout for this call as
> well
> > or
> > > not;
> >
> > I will add an overload with a timeout to the KIP.
> >
> > > 3) If the configure is set to `MANUAL_SETUP`, then during rebalance do
> we
> > > still check if number of partitions of the existing topic match or not;
> > if
> > > not, do we throw the newly added exception or throw a fatal
> > > StreamsException? Today we would throw the StreamsException from
> assign()
> > > which would be then thrown from consumer.poll() as a fatal error.
> > >
> >
> > Yes, I think we should check if the number of partitions match. I
> > propose to throw the newly added exception in the same way as we throw
> > now the MissingSourceTopicException, i.e., throw it from
> > consumer.poll(). WDYT?
> >
> > > Guozhang
> > >
> > >
> > > On Mon, Dec 14, 2020 at 12:47 PM John Roesler 
> > wrote:
> > >
> > >> Thanks, Bruno!
> > >>
> > >> I'm +1 (binding)
> > >>
> > >> -John
> > >>
> > >> On Mon, 2020-12-14 at 09:57 -0600, Leah Thomas wrote:
> > >>> Thanks for the KIP Bruno, LGTM. +1 (non-binding)
> > >>>
> > >>> Cheers,
> > >>> Leah
> > >>>
> > >>> On Mon, Dec 14, 2020 at 4:29 AM Bruno Cadonna 
> > >> wrote:
> > >>>
> >  Hi,
> > 
> >  I'd like to start the voting on KIP-698 that proposes an explicit
> user
> >  initialization of broker-side state for Kafka Streams instead of
> > >> letting
> >  Kafka Streams setting up the broker-side state automatically during
> >  rebalance. Such an explicit initialization avoids possible data loss
> >  issues due to automatic initialization.
> > 
> >  https://cwiki.apache.org/confluence/x/7CnZCQ
> > 
> >  Best,
> >  Bruno
> > 
> > >>
> > >>
> > >>
> > >
> >
>
>
> --
> -- Guozhang
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #455

2021-01-25 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #435

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12190: Fix setting of file permissions on non-POSIX filesystems 
(#9947)


--
[...truncated 3.57 MB...]
TestTopicsTest > testEmptyTopic() STARTED

TestTopicsTest > testEmptyTopic() PASSED

TestTopicsTest > testStartTimestamp() STARTED

TestTopicsTest > testStartTimestamp() PASSED

TestTopicsTest > testNegativeAdvance() STARTED

TestTopicsTest > testNegativeAdvance() PASSED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateWithNullDriver() PASSED

TestTopicsTest > testDuration() STARTED

TestTopicsTest > testDuration() PASSED

TestTopicsTest > testOutputToString() STARTED

TestTopicsTest > testOutputToString() PASSED

TestTopicsTest > testValue() STARTED

TestTopicsTest > testValue() PASSED

TestTopicsTest > testTimestampAutoAdvance() STARTED

TestTopicsTest > testTimestampAutoAdvance() PASSED

TestTopicsTest > testOutputWrongSerde() STARTED

TestTopicsTest > testOutputWrongSerde() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputTopicWithNullTopicName() PASSED

TestTopicsTest > testWrongSerde() STARTED

TestTopicsTest > testWrongSerde() PASSED

TestTopicsTest > testKeyValuesToMapWithNull() STARTED

TestTopicsTest > testKeyValuesToMapWithNull() PASSED

TestTopicsTest > testNonExistingOutputTopic() STARTED

TestTopicsTest > testNonExistingOutputTopic() PASSED

TestTopicsTest > testMultipleTopics() STARTED

TestTopicsTest > testMultipleTopics() PASSED

TestTopicsTest > testKeyValueList() STARTED

TestTopicsTest > testKeyValueList() PASSED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() STARTED

TestTopicsTest > shouldNotAllowToCreateOutputWithNullDriver() PASSED

TestTopicsTest > testValueList() STARTED

TestTopicsTest > testValueList() PASSED

TestTopicsTest > testRecordList() STARTED

TestTopicsTest > testRecordList() PASSED

TestTopicsTest > testNonExistingInputTopic() STARTED

TestTopicsTest > testNonExistingInputTopic() PASSED

TestTopicsTest > testKeyValuesToMap() STARTED

TestTopicsTest > testKeyValuesToMap() PASSED

TestTopicsTest > testRecordsToList() STARTED

TestTopicsTest > testRecordsToList() PASSED

TestTopicsTest > testKeyValueListDuration() STARTED

TestTopicsTest > testKeyValueListDuration() PASSED

TestTopicsTest > testInputToString() STARTED

TestTopicsTest > testInputToString() PASSED

TestTopicsTest > testTimestamp() STARTED

TestTopicsTest > testTimestamp() PASSED

TestTopicsTest > testWithHeaders() STARTED

TestTopicsTest > testWithHeaders() PASSED

TestTopicsTest > testKeyValue() STARTED

TestTopicsTest > testKeyValue() PASSED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() STARTED

TestTopicsTest > shouldNotAllowToCreateTopicWithNullTopicName() PASSED

> Task :streams:upgrade-system-tests-0102:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0102:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0102:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:compileTestJava
> Task :streams:upgrade-system-tests-0102:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0102:testClasses
> Task :streams:upgrade-system-tests-0102:checkstyleTest
> Task :streams:upgrade-system-tests-0102:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0102:test
> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task :streams:upgrade-system-tests-0110:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:testClasses
> Task :streams:upgrade-system-tests-0110:checkstyleTest
> Task :streams:upgrade-system-tests-0110:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:test
> Task :streams:upgrade-system-tests-10:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-10:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-10:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:compileTestJava
> Task :streams:upgrade-system-tests-10:processTestResources NO-SOURCE
> Task :streams:upgrade-system-tests-10:testClasses
> Task :streams:upgrade-system-tests-10:checkstyleTest
> Task :streams:upgrade-system-tests-10:spotbugsMain NO-SOURCE
> Task :streams:upgrade-system-tests-10:test
> Task :streams:upgrade-system-tests-11:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-11:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-11:classes UP-TO-DATE
> Task 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #405

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12190: Fix setting of file permissions on non-POSIX filesystems 
(#9947)


--
[...truncated 3.56 MB...]
MockTimeTest > shouldNotAllowNegativeSleep() STARTED

MockTimeTest > shouldNotAllowNegativeSleep() PASSED

MockTimeTest > shouldAdvanceTimeOnSleep() STARTED

MockTimeTest > shouldAdvanceTimeOnSleep() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectPersistentStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldRespectTaskIdling() STARTED

TopologyTestDriverEosTest > shouldRespectTaskIdling() PASSED

TopologyTestDriverEosTest > shouldUseSourceSpecificDeserializers() STARTED

TopologyTestDriverEosTest > shouldUseSourceSpecificDeserializers() PASSED

TopologyTestDriverEosTest > shouldReturnAllStores() STARTED

TopologyTestDriverEosTest > shouldReturnAllStores() PASSED

TopologyTestDriverEosTest > shouldNotCreateStateDirectoryForStatelessTopology() 
STARTED

TopologyTestDriverEosTest > shouldNotCreateStateDirectoryForStatelessTopology() 
PASSED

TopologyTestDriverEosTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() STARTED

TopologyTestDriverEosTest > 
shouldApplyGlobalUpdatesCorrectlyInRecursiveTopologies() PASSED

TopologyTestDriverEosTest > shouldReturnAllStoresNames() STARTED

TopologyTestDriverEosTest > shouldReturnAllStoresNames() PASSED

TopologyTestDriverEosTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() STARTED

TopologyTestDriverEosTest > 
shouldPassRecordHeadersIntoSerializersAndDeserializers() PASSED

TopologyTestDriverEosTest > shouldProcessConsumerRecordList() STARTED

TopologyTestDriverEosTest > shouldProcessConsumerRecordList() PASSED

TopologyTestDriverEosTest > shouldUseSinkSpecificSerializers() STARTED

TopologyTestDriverEosTest > shouldUseSinkSpecificSerializers() PASSED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() STARTED

TopologyTestDriverEosTest > shouldFlushStoreForFirstInput() PASSED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() STARTED

TopologyTestDriverEosTest > shouldProcessFromSourceThatMatchPattern() PASSED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() STARTED

TopologyTestDriverEosTest > shouldCaptureSinkTopicNamesIfWrittenInto() PASSED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() STARTED

TopologyTestDriverEosTest > shouldUpdateStoreForNewKey() PASSED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
STARTED

TopologyTestDriverEosTest > shouldSendRecordViaCorrectSourceTopicDeprecated() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTime() PASSED

TopologyTestDriverEosTest > shouldSetRecordMetadata() STARTED

TopologyTestDriverEosTest > shouldSetRecordMetadata() PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForLargerValue() PASSED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() STARTED

TopologyTestDriverEosTest > shouldReturnCorrectInMemoryStoreTypeOnly() PASSED

TopologyTestDriverEosTest > shouldThrowForMissingTime() STARTED

TopologyTestDriverEosTest > shouldThrowForMissingTime() PASSED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
STARTED

TopologyTestDriverEosTest > shouldCaptureInternalTopicNamesIfWrittenInto() 
PASSED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() STARTED

TopologyTestDriverEosTest > shouldPunctuateOnWallClockTimeDeprecated() PASSED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() STARTED

TopologyTestDriverEosTest > shouldProcessRecordForTopic() PASSED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
STARTED

TopologyTestDriverEosTest > shouldForwardRecordsFromSubtopologyToSubtopology() 
PASSED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() STARTED

TopologyTestDriverEosTest > shouldNotUpdateStoreForSmallerValue() PASSED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
STARTED

TopologyTestDriverEosTest > shouldCreateStateDirectoryForStatefulTopology() 
PASSED

TopologyTestDriverEosTest > shouldNotRequireParameters() STARTED

TopologyTestDriverEosTest > shouldNotRequireParameters() PASSED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() STARTED

TopologyTestDriverEosTest > shouldPunctuateIfWallClockTimeAdvances() PASSED

> Task :streams:upgrade-system-tests-0110:compileJava NO-SOURCE
> Task :streams:upgrade-system-tests-0110:processResources NO-SOURCE
> Task :streams:upgrade-system-tests-0110:classes UP-TO-DATE
> Task :streams:upgrade-system-tests-0110:checkstyleMain NO-SOURCE
> Task :streams:upgrade-system-tests-0110:compileTestJava
> Task 

[jira] [Created] (KAFKA-12237) Support non-routable quorum voter addresses

2021-01-25 Thread Alok Nikhil (Jira)
Alok Nikhil created KAFKA-12237:
---

 Summary: Support non-routable quorum voter addresses
 Key: KAFKA-12237
 URL: https://issues.apache.org/jira/browse/KAFKA-12237
 Project: Kafka
  Issue Type: Sub-task
Reporter: Alok Nikhil
Assignee: Alok Nikhil


With KIP-595, we expect the RaftConfig to specify the quorum voter endpoints 
upfront at startup. In the general case, this works fine. However, for testing 
we need a lazier approach that discovers the other voters in the quorum 
after startup (i.e., after the controller port is bound). This approach also 
lends itself well to cases where we might have an observer that discovers the 
voter endpoints from, say, a `DescribeQuorum` event.

 

Quoting [~hachikuji]:

 
{panel}
At a high level, I think we should make {{0.0.0.0:0}} a valid address 
configuration from the perspective of {{KafkaRaftClient}}. We are telling the 
client that initialization of the endpoints will come at a later time and 
through some external mechanism. I think this is a generally useful capability 
even outside of testing. For example, we may eventually want to be able to 
initialize an observer from a {{bootstrap.servers}} configuration. In this 
case, discovery of the voter endpoints will come after we have (say) sent a 
{{DescribeQuorum}} to the cluster. We may also want to implement some of the 
more fancy bootstrapping options that etcd has, such as using an external 
service to facilitate address bootstrapping.
{panel}
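
For illustration, the voter configuration this would make valid might look 
like the following (the property name comes from the KRaft KIPs; treating 
{{0.0.0.0:0}} as "endpoint supplied later" is what this ticket proposes):

{code}
controller.quorum.voters=1@0.0.0.0:0,2@0.0.0.0:0,3@0.0.0.0:0
{code}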
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #454

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove `toStruct` and `fromStruct` methods from generated 
protocol classes (#9960)


--
[...truncated 7.15 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3ac81d0d, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3ac81d0d, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6b87a7e1, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@6b87a7e1, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31d8630a, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@31d8630a, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d3003c7, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2d3003c7, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@35eb2610, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@35eb2610, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20429eaf, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20429eaf, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71c41ff5, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@71c41ff5, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@530d277b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@530d277b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@662cb2cf, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@662cb2cf, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@404c845b, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@404c845b, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5d560bb6, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5d560bb6, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7414f2de, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7414f2de, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@a6cc5a9, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@a6cc5a9, 
timestamped = false, caching = true, logging = true PASSED


Re: Requesting to be added as a contributor

2021-01-25 Thread Matthias J. Sax
Done.

On 1/25/21 8:03 AM, Sergio Pena Anaya wrote:
> Hi All,
> 
> I would like to be added as a Kafka contributor and get permissions for
> JIRA and Confluence.
> 
> My username is: spena
> 
> Thanks,
> - Sergio
> 


Re: Kafka Advisory Topic

2021-01-25 Thread Gwen Shapira
Agree that this sounds like a good idea.

Would be good to have a more formal proposal (i.e. a KIP) with the details.
I can think of about 100 different questions (will there be "levels"
like in logs, what type of events are in or out of scope, rate
limiting, data formats, etc).
I am also curious whether the notifications are intended for
humans, automated processes, or even the Kafka client applications
themselves. I hope the proposal can include a few example scenarios to
help us reason about the experience.

Knowles, is this something you want to pick up?

Gwen

On Thu, Jan 21, 2021 at 6:05 AM Christopher Shannon
 wrote:
>
> Hi,
>
> I am on the ActiveMQ PMC and I think this is a very good idea to have a way
> to do advisories/notifications/events (whatever you want to call it). In
> ActiveMQ classic you have advisories and in Artemis you have notifications.
> Having management messages that can be subscribed to in real time is
> actually a major feature that is missing from Kafka that many other brokers
> have.
>
> The idea here would be to publish notifications of different configurable
> events when something important happens so a consumer can listen in on
> things it cares about and be able to do something instead of having to poll
> the admin API. There are many events that happen in a broker that would be
> useful to be notified about. Events such as new connections to the cluster,
> new topics created or destroyed, consumer group creation, authorization
> errors, new leader election, etc. The list is pretty much endless.
>
> The metadata topic that will exist is probably not going to have all of
> this information so some other mechanism would be needed to handle
> publishing these messages to a specific management topic that would be
> useful for a consumer.
>
> Chris
>
>
> On Wed, Jan 20, 2021 at 4:12 PM Boyang Chen 
> wrote:
>
> > Hey Knowles,
> >
> > in Kafka people normally use admin clients to get that metadata. I'm not
> > sure why you mentioned specifically that having a topic to manage this
> > information is useful, but the good news is that in KIP-500
> > <
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-500%3A+Replace+ZooKeeper+with+a+Self-Managed+Metadata+Quorum
> > >
> > we
> > are trying to deprecate Zookeeper and migrate to a self-managed metadata
> > topic quorum. Once this feature is fully done, you should be able to
> > use consumers to pull the metadata log.
> >
> > Best,
> > Boyang
> >
> > On Wed, Jan 20, 2021 at 11:22 AM Knowles Atchison Jr <
> > katchiso...@gmail.com>
> > wrote:
> >
> > > Good afternoon all,
> > >
> > > In our Kafka clusters we have a need to know when certain activities are
> > > performed, mainly topics being created, but brokers coming up/down is
> > also
> > > useful. This would be akin to what ActiveMQ does via advisory messages (
> > > https://activemq.apache.org/advisory-message).
> > >
> > > Since there did not appear to be anything in the ecosystem currently, I
> > > wrote a standalone Java program that watches the various ZooKeeper
> > > locations that the Kafka broker writes to and deltas can tell us
> > > topic/broker actions etc... and writes to a kafka topic for downstream
> > > consumption.
> > >
> > > Ideally, we would rather have the broker handle this internally rather
> > > than yet another service stood up in our systems. I began digging through
> > > the broker source (my Scala is basically hello world level) and there
> > does
> > > not appear to be any mechanism in which this could be easily patched into
> > > the broker.
> > >
> > > Specifically, a producer or consumer acting upon an nonexistent topic or
> > a
> > > manual CreateTopic would trigger a Produce to this advisory topic and the
> > > KafkaApis framework would handle it like any other request. However, by
> > the
> > > time we are inside the getTopicMetadata call there doesn't seem to be a
> > > clean way to fire off another message that would make its way through
> > > KafkaApis. Perhaps another XManager type object is required?
> > >
> > > Looking for alternative ideas or guidance (or I missed something in the
> > > broker).
> > >
> > > Thank you.
> > >
> > > Knowles
> > >
> >
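
A minimal sketch of the consumer side of the idea described above (the topic
name and event format are invented for illustration; only the consumer API
calls are real):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class AdvisoryWatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ops-advisory-watcher");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // "__cluster.advisories" is a hypothetical topic; nothing like it exists in Kafka today
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("__cluster.advisories"));
            while (true) { // runs until interrupted
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // e.g. {"event":"TOPIC_CREATED","topic":"orders"}
                    System.out.println(record.value());
                }
            }
        }
    }
}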



-- 
Gwen Shapira
Engineering Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: Contribute to Kafka

2021-01-25 Thread Mickael Maison
Hi Stefan,

I've added you to the contributor list. You should now be able to
assign JIRAs to yourself.
Tickets with the newbie tag (like in your query) are usually a good
start. Otherwise have a quick browse around the latest tickets and see
if there's anything that looks interesting to you.

Thanks for your interest in Apache Kafka!

On Mon, Jan 25, 2021 at 5:33 PM Stefan Baychev  wrote:
>
> Hello,
>
> My name is Stefan Baychev. I am a Software Engineer with about 9 years of
> experience working within the Java ecosystem.
>
> I would like to contribute to the Kafka Open Source Project. The
> contribution instructions on the web site say: "Please contact us to be
> added to the contributor list with your JIRA account username provided in
> the email."
>
> Here are my Jira details:
>
> Username: twinkle_
> Email: sbayc...@gmail.com
> Full Name: Stefan Baychev
> I would like to pick anything that is good to start with; if
> you have any advice, let me know. Otherwise I would pick something from here:
>
> https://issues.apache.org/jira/browse/KAFKA-10885?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open
>
>
> Best Regards,
>
> Stefan Baychev


Re: [VOTE] KIP-698: Add Explicit User Initialization of Broker-side State to Kafka Streams

2021-01-25 Thread Guozhang Wang
Hey Bruno,

Thanks for your response!

1) Yup I'm good with option a) as well.
2) Thanks!
3) Sounds good to me. I think it would not change any StreamThread
implementation regarding capturing exceptions from consumer.poll() since it
captures StreamsException as fatal.
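
For illustration, the proposed flow might look roughly like this (the method
and exception names are assumptions taken from this thread and the KIP, not
a released API):

KafkaStreams streams = new KafkaStreams(topology, props);
try {
    // proposed blocking call; an overload with a timeout, per point 2)
    streams.init(Duration.ofMinutes(5));
} catch (MissingInternalTopicException | MisconfiguredInternalTopicException e) {
    // user-side handling: create or fix the internal topics, then retry
}
streams.start();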


Guozhang

On Wed, Dec 16, 2020 at 4:43 AM Bruno Cadonna  wrote:

> Hi Guozhang,
>
> Thanks for the feedback!
>
> Please find my answers inline.
>
> Best,
> Bruno
>
>
> On 14.12.20 23:33, Guozhang Wang wrote:
> > Hello Bruno,
> >
> > Just a few more questions about the KIP:
> >
> > 1) If the internal topics exist but the calculated num.partitions do not
> > match the existing topics, what would Streams do;
>
> Good point! I missed explicitly considering misconfigurations in the KIP.
>
> I propose to throw a fatal error in this case during manual and
> automatic initialization. For the fatal error, we have two options:
> a) introduce a second exception besides MissingInternalTopicException,
> e.g. MisconfiguredInternalTopicException
> b) rename MissingInternalTopicException to
> MissingOrMisconfiguredInternalTopicException and throw that in both cases.
>
> Since the process to react to such an exception on the user side should be
> similar, I am fine with option b). However, IMO option a) is a bit
> cleaner. WDYT?
>
> > 2) Since `init()` is a blocking call (we only return after all topics are
> > confirmed to be created), should we have a timeout for this call as well
> or
> > not;
>
> I will add an overload with a timeout to the KIP.
>
> > 3) If the configure is set to `MANUAL_SETUP`, then during rebalance do we
> > still check if number of partitions of the existing topic match or not;
> if
> > not, do we throw the newly added exception or throw a fatal
> > StreamsException? Today we would throw the StreamsException from assign()
> > which would be then thrown from consumer.poll() as a fatal error.
> >
>
> Yes, I think we should check if the number of partitions match. I
> propose to throw the newly added exception in the same way as we throw
> now the MissingSourceTopicException, i.e., throw it from
> consumer.poll(). WDYT?
>
> > Guozhang
> >
> >
> > On Mon, Dec 14, 2020 at 12:47 PM John Roesler 
> wrote:
> >
> >> Thanks, Bruno!
> >>
> >> I'm +1 (binding)
> >>
> >> -John
> >>
> >> On Mon, 2020-12-14 at 09:57 -0600, Leah Thomas wrote:
> >>> Thanks for the KIP Bruno, LGTM. +1 (non-binding)
> >>>
> >>> Cheers,
> >>> Leah
> >>>
> >>> On Mon, Dec 14, 2020 at 4:29 AM Bruno Cadonna 
> >> wrote:
> >>>
>  Hi,
> 
>  I'd like to start the voting on KIP-698 that proposes an explicit user
>  initialization of broker-side state for Kafka Streams instead of
> >> letting
>  Kafka Streams setting up the broker-side state automatically during
>  rebalance. Such an explicit initialization avoids possible data loss
>  issues due to automatic initialization.
> 
>  https://cwiki.apache.org/confluence/x/7CnZCQ
> 
>  Best,
>  Bruno
> 
> >>
> >>
> >>
> >
>


-- 
-- Guozhang


[jira] [Created] (KAFKA-12236) Add version 1 of meta.properties for KIP-500

2021-01-25 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12236:
---

 Summary: Add version 1 of meta.properties for KIP-500
 Key: KAFKA-12236
 URL: https://issues.apache.org/jira/browse/KAFKA-12236
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


KIP-631 dictates a new meta.properties format with version=1. Unlike the old 
meta.properties, this is required for initialization of the KIP-500 
broker/controller. Unlike with ZK, we use `meta.properties` for the 
initialization of the clusterId.
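
For illustration, a version 1 file might look like this (field names per 
KIP-631; the values are placeholders):

{code}
version=1
cluster.id=vPeOCWypqUOSepEvx0cbog
node.id=1
{code}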



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Contribute to Kafka

2021-01-25 Thread Stefan Baychev
Hello,

My name is Stefan Baychev. I am a Software Engineer with about 9 years of
experience working within the Java ecosystem.

I would like to contribute to the Kafka Open Source Project. The
contribution instructions on the web site say: "Please contact us to be
added to the contributor list with your JIRA account username provided in
the email."

Here are my Jira details:

Username: twinkle_
Email: sbayc...@gmail.com
Full Name: Stefan Baychev
I would like to pick anything that is good to start with; if
you have any advice, let me know. Otherwise I would pick something from here:

https://issues.apache.org/jira/browse/KAFKA-10885?jql=project%20%3D%20KAFKA%20AND%20labels%20%3D%20newbie%20AND%20status%20%3D%20Open


Best Regards,

Stefan Baychev


[jira] [Resolved] (KAFKA-12228) Kafka won't start with PEM certificate

2021-01-25 Thread Alexey Kashavkin (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kashavkin resolved KAFKA-12228.
--
Resolution: Not A Bug

> Kafka won't start with PEM certificate
> --
>
> Key: KAFKA-12228
> URL: https://issues.apache.org/jira/browse/KAFKA-12228
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 2.7.0
>Reporter: Alexey Kashavkin
>Priority: Major
> Attachments: kafka.log
>
>
> I found that Kafka 2.7.0 supports PEM certificates and I decided to try 
> setting up the broker with a DigiCert SSL certificate. I used the new options 
> and did everything as in the example in 
> [KIP-651|https://cwiki.apache.org/confluence/display/KAFKA/KIP-651+-+Support+PEM+format+for+SSL+certificates+and+private+key].
>  But I get the error:
> {code:bash}
> [2021-01-20 17:54:55,787] ERROR [KafkaServer id=0] Fatal error during 
> KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> org.apache.kafka.common.config.ConfigException: Invalid value 
> javax.net.ssl.SSLHandshakeException: no cipher suites in common for 
> configuration A client SSLEngine created with the provided settings can't 
> connect to a server SSLEngine created with those settings.
> at 
> org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:98)
> at 
> org.apache.kafka.common.network.SslChannelBuilder.configure(SslChannelBuilder.java:72)
> at 
> org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:157)
> at 
> org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:97)
> at kafka.network.Processor.<init>(SocketServer.scala:790)
> at kafka.network.SocketServer.newProcessor(SocketServer.scala:415)
> at 
> kafka.network.SocketServer.$anonfun$addDataPlaneProcessors$1(SocketServer.scala:288)
> at 
> kafka.network.SocketServer.addDataPlaneProcessors(SocketServer.scala:287)
> at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1(SocketServer.scala:254)
> at 
> kafka.network.SocketServer.$anonfun$createDataPlaneAcceptorsAndProcessors$1$adapted(SocketServer.scala:251)
> at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:553)
> at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:551)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:920)
> at 
> kafka.network.SocketServer.createDataPlaneAcceptorsAndProcessors(SocketServer.scala:251)
> at kafka.network.SocketServer.startup(SocketServer.scala:125)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:303)
> at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
> at kafka.Kafka$.main(Kafka.scala:82)
> at kafka.Kafka.main(Kafka.scala)
> {code}
> Java is used:
> {code:bash}
> openjdk version "1.8.0_272"
> OpenJDK Runtime Environment (build 1.8.0_272-b10)
> OpenJDK 64-Bit Server VM (build 25.272-b10, mixed mode)
> {code}
> OS is Centos 7.8.2003
> _openssl x509 -in certificate.pem -text :_
> {code:java}
> Certificate:
> ...
> Signature Algorithm: ecdsa-with-SHA384
> ...
> Subject Public Key Info:
> Public Key Algorithm: id-ecPublicKey
> Public-Key: (256 bit)
> {code}
> Log is attached.
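
For reference, a PEM-based listener configuration along the lines the reporter 
describes would use the KIP-651 options (a sketch; option names from the KIP, 
certificate and key contents elided):

{code}
ssl.keystore.type=PEM
ssl.keystore.certificate.chain=-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----
ssl.keystore.key=-----BEGIN PRIVATE KEY-----...-----END PRIVATE KEY-----
ssl.truststore.type=PEM
ssl.truststore.certificates=-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----
{code}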



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #434

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove `toStruct` and `fromStruct` methods from generated 
protocol classes (#9960)


--
[...truncated 3.57 MB...]
OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentForCompareKeyValueTimestampWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
STARTED

OutputVerifierTest > 
shouldNotAllowNullProducerRecordWithExpectedRecordForCompareValueTimestamp() 
PASSED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() STARTED

OutputVerifierTest > shouldNotAllowNullExpectedRecordForCompareValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() 
STARTED

OutputVerifierTest > shouldNotAllowNullProducerRecordForCompareKeyValue() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueAndTimestampIsEqualForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullReversForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
STARTED

OutputVerifierTest > 
shouldPassIfKeyAndValueIsEqualWithNullForCompareKeyValueWithProducerRecord() 
PASSED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfKeyIsDifferentWithNullForCompareKeyValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
STARTED

OutputVerifierTest > shouldFailIfValueIsDifferentForCompareKeyValueTimestamp() 
PASSED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldPassIfValueAndTimestampIsEqualForCompareValueTimestampWithProducerRecord()
 PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReversForCompareKeyValueTimestamp() PASSED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 STARTED

OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord()
 PASSED

KeyValueStoreFacadeTest > shouldReturnIsOpen() STARTED

KeyValueStoreFacadeTest > shouldReturnIsOpen() PASSED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() STARTED

KeyValueStoreFacadeTest > shouldDeleteAndReturnPlainValue() PASSED

KeyValueStoreFacadeTest > shouldReturnName() STARTED

KeyValueStoreFacadeTest > shouldReturnName() PASSED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutAllWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() STARTED

KeyValueStoreFacadeTest > shouldReturnIsPersistent() PASSED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() STARTED

KeyValueStoreFacadeTest > shouldForwardDeprecatedInit() PASSED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() STARTED

KeyValueStoreFacadeTest > shouldPutIfAbsentWithUnknownTimestamp() PASSED

KeyValueStoreFacadeTest > shouldForwardClose() STARTED

KeyValueStoreFacadeTest > shouldForwardClose() PASSED

KeyValueStoreFacadeTest > shouldForwardFlush() STARTED

KeyValueStoreFacadeTest > shouldForwardFlush() PASSED

KeyValueStoreFacadeTest > shouldForwardInit() STARTED

KeyValueStoreFacadeTest 

Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #404

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Remove `toStruct` and `fromStruct` methods from generated 
protocol classes (#9960)


--
[...truncated 3.55 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2c9041bd, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@20af095b, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@4e5e1d4c, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@28663e51, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true PASSED


Requesting to be added as a contributor

2021-01-25 Thread Sergio Pena Anaya
Hi All,

I would like to be added as a Kafka contributor and get permissions for
JIRA and Confluence.

My username is: spena

Thanks,
- Sergio


Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-25 Thread Randall Hauch
Thanks for the quick response, Tom, and thanks again for tweaking the
wording on KIP-676. We can absolutely revisit this in the future if it
becomes more of an issue.

If anyone else disagrees, please say so.

Best regards,

Randall

On Mon, Jan 25, 2021 at 9:19 AM Tom Bentley  wrote:

> Hi Randall,
>
> I agree that Kafka Connect's API is more usable given that the user of it
> knows the semantics (and KIP-495 is clear on that point). So perhaps this
> inconsistency isn't enough of a problem that it's worth fixing, at least
> not at the moment.
>
> Kind regards,
>
> Tom
>
> On Fri, Jan 22, 2021 at 6:36 PM Randall Hauch  wrote:
>
> > Thanks for updating the wording in KIP-676.
> >
> > I guess the best path forward depends on what we think needs to change. If
> > we think KIP-676 and the changes already made in AK 2.8 are not quite
> > right, then maybe we should address this either by fixing the changes (and
> > maybe updating KIP-676 as needed) or reverting the changes if bigger
> > changes are necessary.
> >
> > OTOH, if we think KIP-676 and the changes already made in AK 2.8 are
> > correct but we just need to update Connect to have similar behavior, then I
> > don't see why we'd consider reverting the KIP-676 changes in AK 2.8. We
> > could pass another KIP that amends the Connect dynamic logging REST API
> > behavior and fix that in AK 3.0 (if there's not enough time for 2.8).
> >
> > However, it's not clear to me that the Connect dynamic logging REST API
> > behavior is incorrect. The API only allows setting one level at a time
> > (which is different than a Log4J configuration file), and so order matters.
> > Consider this case:
> > 1. PUT '{"level": "TRACE"}'
> > http://localhost:8083/admin/loggers/org.apache.kafka.connect
> > 2. PUT '{"level": "DEBUG"}'
> > http://localhost:8083/admin/loggers/org.apache.kafka
> >
> > Is the second call intended to take precedence and override the first, or
> > was the second not taking precedence over and instead augmenting the first?
> > KIP-495 really considers the latest call to take precedence over all prior
> > ones. This is simple, and ordering the updates can be used to get the
> > desired behavior. For example, swapping the order of these calls easily
> > gets the desired behavior of `org.apache.kafka.connect` (and descendants)
> > are at a TRACE level.
> >
> > IIUC, you seem to suggest that step 2 should not override the fact that
> > step 1 had already set the logger for `org.apache.kafka.connect`. In order
> > to do this, we'd have to track all of the dynamic settings made since the
> > VM started, support unsetting (deleting) previously-set levels, and take
> > all of them into account when any changes are made to potentially apply the
> > net effect of all dynamic settings across all logger contexts. Plus, we'd
> > need the ability to list the set contexts just to know what could or should
> > be deleted, and we'd have to remember the original state defined by the log
> > config so that when dynamic logging context levels are deleted we can
> > properly revert to the correct value (if not overridden by a higher-level
> > dynamic context).
> >
> > In short, this dramatically increases the complexity of both the
> > implementation and the UX behavior, and it's not clear whether all that
> > complexity really adds much value. WDYT?
> >
> > Best regards,
> >
> > Randall
> >
> > On Fri, Jan 22, 2021 at 7:58 AM Tom Bentley  wrote:
> >
> > > Hi Randall,
> > >
> > > Thanks for pointing this out. You're quite right about the behaviour of
> > > the LoggingResource, and I've updated the KIP with your suggested wording.
> > >
> > > However, looking at it has made me realise that while KIP-676 means the
> > > logger levels are now hierarchical, there's still an inconsistency between
> > > how levels are set in Kafka Connect and how it works in the broker.
> > >
> > > In log4j you can configure foo.baz=DEBUG and then foo=INFO and debug
> > > messages from foo.baz.Y will continue to be logged because setting the
> > > parent doesn't override all descendants (the level is inherited). As you
> > > know, in Kafka Connect, the way the log setting works is to find all the
> > > descendant loggers of foo and apply the given level to them, so setting
> > > foo.baz=DEBUG and then foo=INFO means foo.baz.Y debug messages will not
> > > appear.
> > >
> > > Obviously that behavior for Connect is explicitly stated in KIP-495, but I
> > > can't help but feel that the KIP-676 changes not addressing this is a lost
> > > opportunity.
> > >
> > > It's also worth bearing in mind that KIP-653[1] is (hopefully) going to
> > > happen for Kafka 3.0.
> > >
> > > So I wonder if perhaps the pragmatic thing to do would be to:
> > >
> > > 1. Revert the changes for KIP-676 for Kafka 2.8
> > > 2. Pass another KIP, to be implemented for Kafka 3.0, which makes all the
> > > Kafka APIs consistent in both respecting the hierarchy and also in what
> > > 

KIP-618: Exactly-Once Support for Source Connectors

2021-01-25 Thread Chris Egerton
Hi all,

I'd like to resurface KIP-618, which has been expanded from its previous
form ("Atomic commit of source connector records and offsets") to something
more general and, hopefully, more powerful: exactly-once support for source
connectors (including fencing-out of zombie tasks). Would appreciate your
thoughts!

https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Exactly-Once+Support+for+Source+Connectors
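
For readers new to the fencing aspect: the proposal builds on Kafka's standard
transactional-producer fencing. Below is a minimal, self-contained sketch of
that primitive; the transactional.id and topic name are hypothetical
placeholders, not anything prescribed by the KIP.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.ProducerFencedException;
import org.apache.kafka.common.serialization.StringSerializer;

public class FencingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Both producers share one transactional.id, as a zombie task and its
        // replacement would (the id below is an invented example).
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "source-task-0");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> zombie = new KafkaProducer<>(props);
        zombie.initTransactions();

        // The successor bumps the producer epoch for this transactional.id...
        KafkaProducer<String, String> successor = new KafkaProducer<>(props);
        successor.initTransactions();

        // ...so the zombie's next transactional operation is rejected.
        try {
            zombie.beginTransaction();
            zombie.send(new ProducerRecord<>("demo-topic", "key", "value"));
            zombie.commitTransaction();
        } catch (ProducerFencedException e) {
            System.out.println("Zombie fenced out: " + e.getMessage());
        } finally {
            zombie.close();
            successor.close();
        }
    }
}
{code}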

Cheers,

Chris


Re: [DISCUSS] KIP-618: Atomic commit of source connector records and offsets

2021-01-25 Thread Chris Egerton
Hi Ning,

Apologies for the delay in response. I realized after publishing the KIP
that there were some finer points I hadn't considered in my design and that
it was far from providing exactly-once guarantees. In response to your
questions:

1) The goal of the KIP is to ensure the accuracy of the offsets that the
framework provides to source tasks; if tasks choose to manage offsets
outside of the framework, they're on their own. So, the source records and
their offsets will be written/committed to Kafka, and the task will be
provided them on startup, but it (or really, its predecessor) may not have
had time to do cleanup on resources associated with those records before
being killed.

2) I've cleaned up this section and removed the pseudocode as it seems too
low-level to be worth discussing in a KIP. I'll try to summarize here,
though: task.commit() is not what causes offsets provided to the framework
by tasks to be committed; it's simply a follow-up hook provided out of
convenience to tasks so that they can clean up resources associated with
the most recent batch of records (by ack'ing JMS messages, for example).
The Connect framework uses an internal Kafka topic to store source task
offsets.

3) In order to benefit from the improvements proposed in this KIP, yes, the
single source-of-truth should be the OffsetStorageReader provided to the
task by the Connect framework, at least at startup. After startup, tasks
should ideally bookkeep their own offset progress as each request to read
offsets requires a read to the end of the offsets topic, which can be
expensive in some cases.
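
To make points (2) and (3) above concrete, here is a minimal, hypothetical
source task that treats the framework's offset reader as the source of truth
at startup and uses commit() purely as a cleanup hook. The "source"/"orders"
partition key is an invented placeholder, not part of the Connect API.

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

public class SketchSourceTask extends SourceTask {
    // Hypothetical source partition, e.g. one DB table or JMS queue.
    private static final Map<String, Object> PARTITION =
            Collections.singletonMap("source", "orders");

    private Map<String, Object> lastOffset;

    @Override
    public void start(Map<String, String> props) {
        // At startup, the framework-provided reader is the source of truth
        // for where to resume; it reads from Connect's internal offsets topic.
        lastOffset = context.offsetStorageReader().offset(PARTITION);
    }

    @Override
    public List<SourceRecord> poll() {
        // Resume from lastOffset, then bookkeep it in memory rather than
        // re-reading the offsets topic on every batch (which is expensive).
        return Collections.emptyList(); // placeholder for real polling logic
    }

    @Override
    public void commit() {
        // Follow-up hook only: ack JMS messages, release resources, etc.
        // This does NOT write offsets; the framework commits those itself.
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "sketch"; }
}
{code}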

I've since expanded the KIP to include general exactly-once support for
source connectors that should cover the points I neglected in my initial
design, so it should be ready for review again.

Cheers,

Chris

On Mon, Jul 27, 2020 at 11:42 PM Ning Zhang  wrote:

> Hello Chris,
>
> That is an interesting KIP. I have a couple of questions:
>
> (1) In the pseudo-code section, what if the failure happens between 4(b)
> and 5(a), meaning after the producer commits the transaction and before
> task.commitRecord()?
>
> (2) In the "source task life time" section, what is the difference between
> "commit offset" and "offsets to commit"? Given that the offset storage can
> be a Kafka topic (/KafkaOffsetBackingStore.java) and a producer can only
> produce to a Kafka topic, are the topics the same (the topic the producer
> writes offsets to and the topic task.commit() commits to)?
>
> (3) The JDBC source task relies on `context.offsetStorageReader()` (
> https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/source/JdbcSourceTask.java#L140)
> to retrieve the previously committed offset (whether from a fresh start or
> resuming from failure). So it seems that the single source of truth for the
> last known / committed position is the offset storage (e.g. a Kafka topic)
> managed by the periodic task.commit()?
>
> On 2020/05/22 06:20:51, Chris Egerton  wrote:
> > Hi all,
> >
> > I know it's a busy time with the upcoming 2.6 release and I don't expect
> > this to get a lot of traction until that's done, but I've published a KIP
> > for allowing atomic commit of offsets and records for source connectors and
> > would appreciate your feedback:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-618%3A+Atomic+commit+of+source+connector+records+and+offsets
> >
> > This feature should make it possible to implement source connectors with
> > exactly-once delivery guarantees, and even allow a wide range of existing
> > source connectors to provide exactly-once delivery guarantees with no
> > changes required.
> >
> > Cheers,
> >
> > Chris
> >
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk15 #453

2021-01-25 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-676: Respect the logging hierarchy

2021-01-25 Thread Tom Bentley
Hi Randall,

I agree that Kafka Connect's API is more usable given that the user of it
knows the semantics (and KIP-495 is clear on that point). So perhaps this
inconsistency isn't enough of a problem that it's worth fixing, at least
not at the moment.

Kind regards,

Tom

On Fri, Jan 22, 2021 at 6:36 PM Randall Hauch  wrote:

> Thanks for updating the wording in KIP-676.
>
> I guess the best path forward depends on what we think needs to change. If
> we think KIP-676 and the changes already made in AK 2.8 are not quite
> right, then maybe we should address this either by fixing the changes (and
> maybe updating KIP-676 as needed) or reverting the changes if bigger
> changes are necessary.
>
> OTOH, if we think KIP-676 and the changes already made in AK 2.8 are
> correct but we just need to update Connect to have similar behavior, then I
> don't see why we'd consider reverting the KIP-676 changes in AK 2.8. We
> could pass another KIP that amends the Connect dynamic logging REST API
> behavior and fix that in AK 3.0 (if there's not enough time for 2.8).
>
> However, it's not clear to me that the Connect dynamic logging REST API
> behavior is incorrect. The API only allows setting one level at a time
> (which is different than a Log4J configuration file), and so order matters.
> Consider this case:
> 1. PUT '{"level": "TRACE"}'
> http://localhost:8083/admin/loggers/org.apache.kafka.connect
> 2. PUT '{"level": "DEBUG"}'
> http://localhost:8083/admin/loggers/org.apache.kafka
>
> Is the second call intended to take precedence and override the first, or
> was the second not taking precedence over and instead augmenting the first?
> KIP-495 really considers the latest call to take precedence over all prior
> ones. This is simple, and ordering the updates can be used to get the
> desired behavior. For example, swapping the order of these calls easily
> gets the desired behavior of `org.apache.kafka.connect` (and descendants)
> are at a TRACE level.
>
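
For concreteness, the two calls above can be reproduced with Java 11's
built-in HTTP client (assuming a Connect worker listening on localhost:8083,
as in the example). Under KIP-495's last-call-wins semantics, the second PUT
leaves `org.apache.kafka.connect` at DEBUG:

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoggerLevelDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // 1. Set org.apache.kafka.connect (and its descendants) to TRACE.
        put(client, "org.apache.kafka.connect", "TRACE");
        // 2. Set org.apache.kafka (and its descendants) to DEBUG. Since the
        // endpoint applies the level to every descendant logger, this also
        // overrides step 1: org.apache.kafka.connect ends up at DEBUG.
        put(client, "org.apache.kafka", "DEBUG");
    }

    static void put(HttpClient client, String logger, String level) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8083/admin/loggers/" + logger))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString("{\"level\": \"" + level + "\"}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(logger + " -> " + response.statusCode());
    }
}
{code}
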
> IIUC, you seem to suggest that step 2 should not override the fact that
> step 1 had already set the logger for `org.apache.kafka.connect`. In order
> to do this, we'd have to track all of the dynamic settings made since the
> VM started, support unsetting (deleting) previously-set levels, and take
> all of them into account when any changes are made to potentially apply the
> net effect of all dynamic settings across all logger contexts. Plus, we'd
> need the ability to list the set contexts just to know what could or should
> be deleted, and we'd have to remember the original state defined by the log
> config so that when dynamic logging context levels are deleted we can
> properly revert to the correct value (if not overridden by a higher-level
> dynamic context).
>
> In short, this dramatically increases the complexity of both the
> implementation and the UX behavior, and it's not clear whether all that
> complexity really adds much value. WDYT?
>
> Best regards,
>
> Randall
>
> On Fri, Jan 22, 2021 at 7:58 AM Tom Bentley  wrote:
>
> > Hi Randall,
> >
> > Thanks for pointing this out. You're quite right about the behaviour of the
> > LoggingResource, and I've updated the KIP with your suggested wording.
> >
> > However, looking at it has made me realise that while KIP-676 means the
> > logger levels are now hierarchical, there's still an inconsistency between
> > how levels are set in Kafka Connect and how it works in the broker.
> >
> > In log4j you can configure foo.baz=DEBUG and then foo=INFO and debug
> > messages from foo.baz.Y will continue to be logged because setting the
> > parent doesn't override all descendants (the level is inherited). As you
> > know, in Kafka Connect, the way the log setting works is to find all the
> > descendant loggers of foo and apply the given level to them, so setting
> > foo.baz=DEBUG and then foo=INFO means foo.baz.Y debug messages will not
> > appear.
> >
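
For illustration, a minimal log4j 1.x sketch of the difference Tom describes,
using the placeholder names foo, foo.baz and foo.baz.Y from the discussion
(an assumption-laden illustration, not Kafka code):

{code:java}
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class HierarchyDemo {
    public static void main(String[] args) {
        Logger.getLogger("foo.baz").setLevel(Level.DEBUG);
        Logger.getLogger("foo").setLevel(Level.INFO);

        // Plain log4j: foo.baz.Y has no level of its own, so it inherits from
        // its nearest configured ancestor, foo.baz -> effective level DEBUG.
        System.out.println(Logger.getLogger("foo.baz.Y").getEffectiveLevel());

        // Kafka Connect's endpoint instead enumerates the descendants of
        // "foo" and sets each one explicitly, which would pin foo.baz (and
        // hence foo.baz.Y) to INFO, suppressing the DEBUG messages.
    }
}
{code}
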
> > Obviously that behavior for Connect is explicitly stated in KIP-495, but I
> > can't help but feel that the KIP-676 changes not addressing this is a lost
> > opportunity.
> >
> > It's also worth bearing in mind that KIP-653[1] is (hopefully) going to
> > happen for Kafka 3.0.
> >
> > So I wonder if perhaps the pragmatic thing to do would be to:
> >
> > 1. Revert the changes for KIP-676 for Kafka 2.8
> > 2. Pass another KIP, to be implemented for Kafka 3.0, which makes all the
> > Kafka APIs consistent in both respecting the hierarchy and also in what
> > updating a logger level means.
> >
> > I don't have a particularly strong preference either way, but it seems
> > better, from a user's PoV, if all these logging changes happened in a major
> > release and achieved consistency across components going forward.
> >
> > Thoughts?
> >
> > Kind regards,
> >
> > Tom
> >
> > [1]:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
> >
> >
> >
> > On Thu, Jan 21, 2021 at 9:17 PM Randall Hauch  wrote:
> >
> > > Tom, et al.,
> 

[jira] [Created] (KAFKA-12235) ZkAdminManager.describeConfigs returns no config when 2+ configuration keys are specified

2021-01-25 Thread Ivan Yurchenko (Jira)
Ivan Yurchenko created KAFKA-12235:
--

 Summary: ZkAdminManager.describeConfigs returns no config when 2+ 
configuration keys are specified
 Key: KAFKA-12235
 URL: https://issues.apache.org/jira/browse/KAFKA-12235
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.7.0
Reporter: Ivan Yurchenko
Assignee: Ivan Yurchenko


When {{ZkAdminManager.describeConfigs}} receives {{DescribeConfigsResource}} 
with 2 or more {{configurationKeys}} specified, it returns an empty 
configuration.

Here's a test for {{ZkAdminManagerTest}} that reproduces this issue:
  
{code:scala}
@Test
def testDescribeConfigsWithConfigurationKeys(): Unit = {
  EasyMock.expect(zkClient.getEntityConfigs(ConfigType.Topic, topic))
    .andReturn(TestUtils.createBrokerConfig(brokerId, "zk"))
  EasyMock.expect(metadataCache.contains(topic)).andReturn(true)

  EasyMock.replay(zkClient, metadataCache)

  val resources = List(new DescribeConfigsRequestData.DescribeConfigsResource()
    .setResourceName(topic)
    .setResourceType(ConfigResource.Type.TOPIC.id)
    .setConfigurationKeys(List("retention.ms", "retention.bytes", "segment.bytes").asJava)
  )

  val adminManager = createAdminManager()
  val results: List[DescribeConfigsResponseData.DescribeConfigsResult] =
    adminManager.describeConfigs(resources, true, true)
  assertEquals(Errors.NONE.code, results.head.errorCode())
  val resultConfigKeys = results.head.configs().asScala.map(r => r.name()).toSet
  assertEquals(Set("retention.ms", "retention.bytes", "segment.bytes"),
    resultConfigKeys)
}
{code}
Works fine with one configuration key, though.

A patch will follow shortly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [apache/kafka] KAFKA-10413 Allow even distribution of lost/new tasks when more than one worker

2021-01-25 Thread Ramesh Krishnan
Hi Konstantine,

Please re-review the below PR; this is a crucial piece of functionality
that is blocking us from using incremental cooperative rebalance.

https://github.com/apache/kafka/pull/9319


Thanks
Ramesh

On Fri, Dec 18, 2020 at 12:20 AM Ramesh Krishnan 
wrote:

> Hi Team,
>
> Can someone help re-trigger the build for this PR?
>
> https://github.com/apache/kafka/pull/9319
>
> Thanks
> Ramesh
>


Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-25 Thread Thomas Scott
Thanks Ismael, that's a lot better. I've updated the KIP with this
behaviour instead.
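
For anyone skimming the thread, a rough sketch of what that client-side
fallback might look like. The batched request is only what KIP-709 proposes
(it does not exist in the Admin API yet); the per-group call below is the
existing API.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class GroupOffsets {
    // A real implementation would first try the batched OffsetFetch proposed
    // by KIP-709 and, on UnsupportedVersionException from an old broker, fall
    // back to the existing one-request-per-group path below.
    static Map<String, Map<TopicPartition, OffsetAndMetadata>> fetchPerGroup(
            Admin admin, List<String> groupIds) throws Exception {
        Map<String, Map<TopicPartition, OffsetAndMetadata>> result = new HashMap<>();
        for (String groupId : groupIds) {
            // Existing single-group API: one OffsetFetch request per group.
            result.put(groupId, admin.listConsumerGroupOffsets(groupId)
                    .partitionsToOffsetAndMetadata()
                    .get());
        }
        return result;
    }
}
{code}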

On Mon, Jan 25, 2021 at 11:42 AM Ismael Juma  wrote:

> Thanks for the KIP, Thomas. One question below:
>
> > Should an Admin client with this new functionality be used against an old
> > broker that cannot handle these requests, then the methods will throw
> > UnsupportedVersionException, as per the usual pattern.
>
>
> Did we consider automatically falling back to the single group id request
> if the more efficient one is not supported?
>
> Ismael
>
> On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  wrote:
>
> > Hi,
> >
> > I'm starting this thread to discuss KIP-709 to extend OffsetFetch
> requests
> > to accept multiple group ids. Please check out the KIP here:
> >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
> >
> > Any comments much appreciated.
> >
> > thanks,
> >
> > Tom
> >
>


Jenkins build is back to normal : Kafka » kafka-trunk-jdk11 #433

2021-01-25 Thread Apache Jenkins Server
See 




Re: Unsubscribe from Notifications

2021-01-25 Thread Bruno Cadonna

Hi Team HirinGuru,

if you want to unsubscribe, see the description on how to do that on 
https://kafka.apache.org/contact


Best,
Bruno

On 22.01.21 07:41, Hiringuru wrote:

Hi,

Kindly unsubscribe our email address from your daily notifications. We don't 
want to receive any further notifications.

It would be highly appreciated if you could check and remove our email 
address from your list.

Thanks
Team HirinGuru



Re: [DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-25 Thread Ismael Juma
Thanks for the KIP, Thomas. One question below:

> Should an Admin client with this new functionality be used against an old
> broker that cannot handle these requests, then the methods will throw
> UnsupportedVersionException, as per the usual pattern.


Did we consider automatically falling back to the single group id request
if the more efficient one is not supported?

Ismael

On Mon, Jan 25, 2021 at 3:34 AM Thomas Scott  wrote:

> Hi,
>
> I'm starting this thread to discuss KIP-709 to extend OffsetFetch requests
> to accept multiple group ids. Please check out the KIP here:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258
>
> Any comments much appreciated.
>
> thanks,
>
> Tom
>


[DISCUSS] KIP-709: Extend OffsetFetch requests to accept multiple group ids

2021-01-25 Thread Thomas Scott
Hi,

I'm starting this thread to discuss KIP-709 to extend OffsetFetch requests
to accept multiple group ids. Please check out the KIP here:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258

Any comments much appreciated.

thanks,

Tom


[jira] [Created] (KAFKA-12234) Extend OffsetFetch requests to accept multiple group ids.

2021-01-25 Thread Tom Scott (Jira)
Tom Scott created KAFKA-12234:
-

 Summary: Extend OffsetFetch requests to accept multiple group ids.
 Key: KAFKA-12234
 URL: https://issues.apache.org/jira/browse/KAFKA-12234
 Project: Kafka
  Issue Type: Improvement
Affects Versions: 2.8.0
Reporter: Tom Scott


More details are in the KIP: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=173084258



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-trunk-jdk8 #403

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: MessageUtil: remove some deadcode (#9931)

[github] MINOR: Fix typo in Utils#toPositive (#9943)


--
[...truncated 3.56 MB...]
MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@58d98cb9, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5c8ea7f4, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@458a258c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1a737e4e, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@1cd29773, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@3c329997, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@12d0f0c0, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2ea7d0ac, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5b4df9b4, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fb73970, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fb73970, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4812e23a, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@4812e23a, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5a233e05, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5a233e05, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@20b01353, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@20b01353, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e5c5241, 
timestamped = false, caching = false, logging = false 

Build failed in Jenkins: Kafka » kafka-trunk-jdk15 #452

2021-01-25 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: MessageUtil: remove some deadcode (#9931)


--
[...truncated 3.58 MB...]

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@b898b4b, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@79d24a0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@23ea5d0c, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@342a1707, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5a53a0d6, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@5ec80009, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.WindowStoreBuilder@2121bcb3, 
timestamped = false, caching = false, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@52389edf, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e0132b1, 
timestamped = false, caching = true, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@2e0132b1, 
timestamped = false, caching = true, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fcdf39b, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@5fcdf39b, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7f8b90, 
timestamped = false, caching = true, logging = false STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@7f8b90, 
timestamped = false, caching = true, logging = false PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@17d7090, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@17d7090, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@25a27f0b, 
timestamped = false, caching = false, logging = true STARTED

MockProcessorContextStateStoreTest > builder = 
org.apache.kafka.streams.state.internals.SessionStoreBuilder@25a27f0b, 
timestamped = false, caching = false, logging = true PASSED

MockProcessorContextStateStoreTest > builder = 

[jira] [Resolved] (KAFKA-12179) Install Kafka on IBM Cloud

2021-01-25 Thread Mickael Maison (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mickael Maison resolved KAFKA-12179.

Fix Version/s: (was: 2.7.0)
   Resolution: Invalid

> Install Kafka on IBM Cloud
> --
>
> Key: KAFKA-12179
> URL: https://issues.apache.org/jira/browse/KAFKA-12179
> Project: Kafka
>  Issue Type: Improvement
>  Components: docs, documentation
>Affects Versions: 2.7.0
>Reporter: Muhammad Arif
>Priority: Trivial
> Attachments: Kafka1.png, Kafka2.png, KafkaVerify1.png, 
> KafkaVerify2.png, KafkaVerify3.png, KafkaVerify4.png, Kubernetes1.png, 
> Kubernetes2.png, Kubernetes3.png, Kubernetes4.png, Kubernetes5.png, 
> Kubernetes6.png, Kubernetes7.png, Storage1.png, Storage2.png
>
>
> *Installing Kafka on IBM Cloud*
>  
> This document will describe how to install Kafka on IBM Cloud using 
> Kubernetes Service.
>  
> *+Contents+*
>  # Introduction
>  # Provision Kubernetes Cluster
>  # Deploy IBM Cloud Block-Storage Plugin
>  # Deploy Kafka
>  # Verifying the Kafka Installation
>  *Introduction*
> To complete this tutorial, you should have an IBM Cloud account; if you do 
> not have one, please [register/sign up|https://cloud.ibm.com/registration] 
> here.
> To install Kafka, we use a Kubernetes cluster with the IBM Cloud 
> Block-Storage plugin for persistent volumes. Upon completion of this 
> tutorial, you will have Kafka up and running on the Kubernetes cluster.
>  # Provision the Kubernetes cluster; if you have already set up one, skip to 
> step 2.
>  # Deploy the IBM Cloud Block-Storage Plugin to the created cluster; if you 
> have already done this, skip to step 3.
>  # Deploy Kafka.
> *Provision Kubernetes Cluster*
>  * Click on the *Catalog* button on top center. Open 
> [Catalog|https://cloud.ibm.com/catalog].  !Kubernetes1.png!
>  * In search catalog box, search for *Kubernetes Service* and click on it.  
> !Kubernetes2.png!
> You are now at the Create Kubernetes Cluster page, where you can choose 
> between two plans for creating the cluster: the free plan and the standard 
> plan.
>  
> *Option1- Using Free Plan:*
>  * Select Pricing Plan as “*Free*”.
>  * Click on *Create*. !Kubernetes3.png!
>  * Wait a few minutes, and then your cluster will be ready.
>  
> *_Note_*_: Please be careful when choosing the free cluster, as your pods 
> could get stuck in a pending state due to insufficient compute and memory 
> resources. If you face this kind of issue, please increase your resources by 
> recreating the cluster with the standard plan._
>  
> *Option2- Using Standard Plan:*
>  * Select Pricing Plan as “*Standard*” 
>  * Select the latest available Kubernetes version, or the one your 
> application requires. In our example we have set it to ‘*1.18.13*’.
>  * !Kubernetes4.png!
>  * Select Infrastructure as “*Classic*”
>  * Leave Resource Group to “*Default*”
>  * Select Geography as “*Asia*” or your desired one.
>  * Select Availability as “*Single Zone*”.
> _This option allows you to create the resources in either a single or 
> multiple availability zones. A multi-zone setup creates the resources in 
> more than one availability zone, so in case of catastrophe the cluster can 
> sustain the disaster and continue to work._
>  * Select Worker Zone as *Chennai 01*. !Kubernetes5.png!
>  * In Worker Pool, input your desired number of nodes as “*3*”
>  * Leave the Encrypt Local Disk option to “*On*”
>  * Select Master Service Endpoint to “*Both private and public endpoints*” 
> !Kubernetes6.png!
>  * Give your cluster-name as “*Kafka-Cluster*”
>  * Provide *tags* to your cluster and click on *Create*.
>  * Wait a few minutes, and then your cluster will be ready.  !Kubernetes7.png!
> *Deploy IBM Cloud Block-Storage Plugin*
>  * Click on the *Catalog* button on top center.
>  * In search catalog box, search for *IBM Cloud Block Storage Plug-in* and 
> click on it  !Storage1.png!
>  * Select your cluster as "*Kafka-Cluster*"
>  * Provide *Target Namespace* as “*kafka-storage*”, leave *name* and 
> *resource group* to *default*
>  * Click on *Install !Storage2.png!*
>  
> *Deploy Kafka*
>  * Again go to the *Catalog* and search for Kafka.  !Kafka1.png!
>  * Provide the details as below.  !Kafka2.png!
>  * Target: *IBM Kubernetes Service*
>  * Method: *Helm chart*
>  * Kubernetes cluster: *Kafka-Cluster*(jp-tok)
>  * Target namespace: *kafka*
>  * Workspace: *kafka-01-07-2021*
>  * Resource group: *Default*
>  * Click on *Parameters with Default Values*, you can set the deployment 
> values or use the default ones, we have used the default ones in this example.
>  * Click on *Install*.
> *Verifying the Kafka Installation*
>  * Go to the *Resources List* in the Left Navigation Menu and click on 
> *Kubernetes* and then *Clusters. 

[GitHub] [kafka-site] miguno commented on a change in pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-25 Thread GitBox


miguno commented on a change in pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#discussion_r563562851



##
File path: 27/ops.html
##
@@ -553,7 +539,558 @@ 6.3 Kafka Configuration
+  6.3 Geo-Replication (Cross-Cluster Data 
Mirroring)
+
+  Geo-Replication 
Overview
+
+  
+Kafka administrators can define data flows that cross the boundaries of 
individual Kafka clusters, data centers, or geo-regions. Such event streaming 
setups are often needed for organizational, technical, or legal requirements. 
Common scenarios include:
+  
+
+  
+Geo-replication
+Disaster recovery
+Feeding edge clusters into a central, aggregate cluster
+Physical isolation of clusters (such as production vs. testing)
+Cloud migration or hybrid cloud deployments
+Legal and compliance requirements
+  
+
+  
+Administrators can set up such inter-cluster data flows with Kafka's 
MirrorMaker (version 2), a tool to replicate data between different Kafka 
environments in a streaming manner. MirrorMaker is built on top of the Kafka 
Connect framework and supports features such as:
+  
+
+  
+Replicates topics (data plus configurations)
+Replicates consumer groups including offsets to migrate applications 
between clusters
+Replicates ACLs
+Preserves partitioning
+Automatically detects new topics and partitions
+Provides a wide range of metrics, such as end-to-end replication 
latency across multiple data centers/clusters
+Fault-tolerant and horizontally scalable operations
+  
+
+  
+  Note: Geo-replication with MirrorMaker replicates data across Kafka 
clusters. This inter-cluster replication is different from Kafka's intra-cluster replication, which replicates data within 
the same Kafka cluster.
+  
+
+  What Are Replication 
Flows
+
+  
+With MirrorMaker, Kafka administrators can replicate topics, topic 
configurations, consumer groups and their offsets, and ACLs from one or more 
source Kafka clusters to one or more target Kafka clusters, i.e., across 
cluster environments. In a nutshell, MirrorMaker consumes data from the source 
cluster with source connectors, and then replicates the data by producing to 
the target cluster with sink connectors.

Review comment:
   Thanks, @ryannedolan. Text updated.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] miguno commented on a change in pull request #324: KAFKA-8930: MirrorMaker v2 documentation

2021-01-25 Thread GitBox


miguno commented on a change in pull request #324:
URL: https://github.com/apache/kafka-site/pull/324#discussion_r563562309



##
File path: 27/ops.html
##
@@ -553,7 +539,558 @@ 6.3 Kafka Configuration
+  6.3 Geo-Replication (Cross-Cluster Data 
Mirroring)
+
+  Geo-Replication 
Overview
+
+  
+Kafka administrators can define data flows that cross the boundaries of 
individual Kafka clusters, data centers, or geo-regions. Such event streaming 
setups are often needed for organizational, technical, or legal requirements. 
Common scenarios include:
+  
+
+  
+Geo-replication
+Disaster recovery
+Feeding edge clusters into a central, aggregate cluster
+Physical isolation of clusters (such as production vs. testing)
+Cloud migration or hybrid cloud deployments
+Legal and compliance requirements
+  
+
+  
+Administrators can set up such inter-cluster data flows with Kafka's 
MirrorMaker (version 2), a tool to replicate data between different Kafka 
environments in a streaming manner. MirrorMaker is built on top of the Kafka 
Connect framework and supports features such as:
+  
+
+  
+Replicates topics (data plus configurations)
+Replicates consumer groups including offsets to migrate applications 
between clusters
+Replicates ACLs
+Preserves partitioning
+Automatically detects new topics and partitions
+Provides a wide range of metrics, such as end-to-end replication 
latency across multiple data centers/clusters
+Fault-tolerant and horizontally scalable operations
+  
+
+  
+  Note: Geo-replication with MirrorMaker replicates data across Kafka 
clusters. This inter-cluster replication is different from Kafka's intra-cluster replication, which replicates data within 
the same Kafka cluster.
+  
+
+  What Are Replication 
Flows
+
+  
+With MirrorMaker, Kafka administrators can replicate topics, topic 
configurations, consumer groups and their offsets, and ACLs from one or more 
source Kafka clusters to one or more target Kafka clusters, i.e., across 
cluster environments. In a nutshell, MirrorMaker consumes data from the source 
cluster with source connectors, and then replicates the data by producing to 
the target cluster with sink connectors.
+  
+
+  
+These directional flows from source to target clusters are called 
replication flows. They are defined with the format 
{source_cluster}->{target_cluster} in the MirrorMaker 
configuration file as described later. Administrators can create complex 
replication topologies based on these flows.
+  
+
+  
+Here are some example patterns:
+  
+
+  
+Active/Active high availability deployments: A->B, 
B->A
+Active/Passive or Active/Standby high availability deployments: 
A->B
+Aggregation (e.g., from many clusters to one): A->K, B->K, 
C->K
+Fan-out (e.g., from one to many clusters): K->A, K->B, 
K->C
+Forwarding: A->B, B->C, C->D
+  
+
+  
+By default, a flow replicates all topics and consumer groups. However, 
each replication flow can be configured independently. For instance, you can 
define that only specific topics or consumer groups are replicated from the 
source cluster to the target cluster.
+  
+
+  
+Here is a first example on how to configure data replication from a 
primary cluster to a secondary cluster (an 
active/passive setup):
+  
+
+# Basic settings
+clusters = primary, secondary
+primary.bootstrap.servers = broker3-primary:9092
+secondary.bootstrap.servers = broker5-secondary:9092
+
+# Define replication flows
+primary->secondary.enable = true
+primary->secondary.topics = foobar-topic, quux-.*
+
+
+
+  Configuring 
Geo-Replication
+
+  
+The following sections describe how to configure and run a dedicated 
MirrorMaker cluster. If you want to run MirrorMaker within an existing Kafka 
Connect cluster or other supported deployment setups, please refer to KIP-382: 
MirrorMaker 2.0 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0) 
and be aware that the names of configuration settings may vary between 
deployment modes.
+  
+
+  
+Beyond what's covered in the following sections, further examples and 
information on configuration settings are available at:
+  
+
+  
+ MirrorMakerConfig (https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorMakerConfig.java), MirrorConnectorConfig (https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/MirrorConnectorConfig.java)
+ DefaultTopicFilter for topics (https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/DefaultTopicFilter.java), DefaultGroupFilter for consumer groups (https://github.com/apache/kafka/blob/trunk/connect/mirror/src/main/java/org/apache/kafka/connect/mirror/DefaultGroupFilter.java)
+ Example configuration settings in