[kafka-site] branch asf-site updated: Add atguigu (http://www.atguigu.com/) to the list of the "Powered By ❤" (#418)

2022-06-27 Thread guozhang
This is an automated email from the ASF dual-hosted git repository.

guozhang pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/kafka-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 375a2ec4 Add atguigu (http://www.atguigu.com/) to the list of the "Powered By ❤" (#418)
375a2ec4 is described below

commit 375a2ec493b1b1e0f5fbf3e275c90f0b09bc982b
Author: realdengziqi <42276568+realdengz...@users.noreply.github.com>
AuthorDate: Tue Jun 28 11:28:03 2022 +0800

Add atguigu (http://www.atguigu.com/) to the list of the "Powered By ❤" (#418)

Reviewers: Guozhang Wang 
---
 images/powered-by/atguigu.png | Bin 0 -> 39711 bytes
 powered-by.html   |   5 +
 2 files changed, 5 insertions(+)

diff --git a/images/powered-by/atguigu.png b/images/powered-by/atguigu.png
new file mode 100644
index ..4e72d8b5
Binary files /dev/null and b/images/powered-by/atguigu.png differ
diff --git a/powered-by.html b/powered-by.html
index ce172902..c5922bca 100644
--- a/powered-by.html
+++ b/powered-by.html
@@ -694,6 +694,11 @@
 "logo": "atruvia_logo_online_rgb.png",
 "logoBgColor": "#d4f2f5",
 "description": "At Atruvia we use Apache Kafka to share events within the modern banking platform."
+}, {
+"link": "http://www.atguigu.com/",
+"logo": "atguigu.png",
+"logoBgColor": "#ff",
+"description": "In our real-time data warehouse, apache kafka is used as a reliable distributed message queue, which allows us to build a highly available analysis system."
 }];
 
 



[kafka] branch trunk updated: KAFKA-13963: Clarified TopologyDescription JavaDoc for Processors API forward() calls (#12293)

2022-06-27 Thread mjsax
This is an automated email from the ASF dual-hosted git repository.

mjsax pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new 025e47b833 KAFKA-13963: Clarified TopologyDescription JavaDoc for Processors API forward() calls (#12293)
025e47b833 is described below

commit 025e47b8334eb7125c7fdd2f725a2fef3c98344c
Author: Tom Kaszuba 
AuthorDate: Tue Jun 28 03:50:34 2022 +0200

KAFKA-13963: Clarified TopologyDescription JavaDoc for Processors API forward() calls (#12293)

Reviewers: Matthias J. Sax 
---
 streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java | 1 +
 1 file changed, 1 insertion(+)

diff --git 
a/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java 
b/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
index 6f26779c96..deea3a465c 100644
--- a/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
+++ b/streams/src/main/java/org/apache/kafka/streams/TopologyDescription.java
@@ -30,6 +30,7 @@ import java.util.regex.Pattern;
 * In contrast, two sub-topologies are not connected but can be linked to each other via topics, i.e., if one
 * sub-topology {@link Topology#addSink(String, String, String...) writes} into a topic and another sub-topology
 * {@link Topology#addSource(String, String...) reads} from the same topic.
+ * Message {@link ProcessorContext#forward(Object, Object) forwards} using custom Processors and Transformers are not considered in the topology graph.
 * 
 * When {@link KafkaStreams#start()} is called, different sub-topologies will be constructed and executed as independent
 * {@link StreamTask tasks}.



[kafka] branch trunk updated: MINOR: Support KRaft in GroupAuthorizerIntegrationTest (#12336)

2022-06-27 Thread jgus
This is an automated email from the ASF dual-hosted git repository.

jgus pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git


The following commit(s) were added to refs/heads/trunk by this push:
 new d654bc1b15 MINOR: Support KRaft in GroupAuthorizerIntegrationTest (#12336)
d654bc1b15 is described below

commit d654bc1b15740acf8f1647a0f4533f4cd7f71271
Author: Jason Gustafson 
AuthorDate: Mon Jun 27 16:01:15 2022 -0700

MINOR: Support KRaft in GroupAuthorizerIntegrationTest (#12336)

Support KRaft in `GroupAuthorizerIntegrationTest`.

Reviewers: David Arthur 
---
 .../kafka/api/AuthorizerIntegrationTest.scala  | 39 ++
 .../kafka/api/GroupAuthorizerIntegrationTest.scala | 60 +++---
 .../kafka/server/QuorumTestHarness.scala   | 19 ---
 .../kafka/integration/KafkaServerTestHarness.scala | 10 
 4 files changed, 78 insertions(+), 50 deletions(-)

diff --git 
a/core/src/test/scala/integration/kafka/api/AuthorizerIntegrationTest.scala 
b/core/src/test/scala/integration/kafka/api/AuthorizerIntegrationTest.scala
index fdc18c..a109ae8ce4 100644
--- a/core/src/test/scala/integration/kafka/api/AuthorizerIntegrationTest.scala
+++ b/core/src/test/scala/integration/kafka/api/AuthorizerIntegrationTest.scala
@@ -85,9 +85,8 @@ object AuthorizerIntegrationTest {
   class PrincipalBuilder extends DefaultKafkaPrincipalBuilder(null, null) {
 override def build(context: AuthenticationContext): KafkaPrincipal = {
   context.listenerName match {
-case BrokerListenerName => BrokerPrincipal
+case BrokerListenerName | ControllerListenerName => BrokerPrincipal
 case ClientListenerName => ClientPrincipal
-case ControllerListenerName => BrokerPrincipal
 case listenerName => throw new IllegalArgumentException(s"No principal mapped to listener $listenerName")
   }
 }
@@ -152,32 +151,32 @@ class AuthorizerIntegrationTest extends BaseRequestTest {
   consumerConfig.setProperty(ConsumerConfig.GROUP_ID_CONFIG, group)
 
   override def brokerPropertyOverrides(properties: Properties): Unit = {
+properties.put(KafkaConfig.BrokerIdProp, brokerId.toString)
+addNodeProperties(properties)
+  }
+
+  override def kraftControllerConfigs(): collection.Seq[Properties] = {
+val controllerConfigs = super.kraftControllerConfigs()
+controllerConfigs.foreach(addNodeProperties)
+controllerConfigs
+  }
+
+  private def addNodeProperties(properties: Properties): Unit = {
 if (isKRaftTest()) {
   properties.put(KafkaConfig.AuthorizerClassNameProp, classOf[StandardAuthorizer].getName)
-  properties.put(StandardAuthorizer.SUPER_USERS_CONFIG, BrokerPrincipal.toString())
+  properties.put(StandardAuthorizer.SUPER_USERS_CONFIG, BrokerPrincipal.toString)
 } else {
   properties.put(KafkaConfig.AuthorizerClassNameProp, classOf[AclAuthorizer].getName)
 }
-properties.put(KafkaConfig.BrokerIdProp, brokerId.toString)
+
 properties.put(KafkaConfig.OffsetsTopicPartitionsProp, "1")
 properties.put(KafkaConfig.OffsetsTopicReplicationFactorProp, "1")
 properties.put(KafkaConfig.TransactionsTopicPartitionsProp, "1")
 properties.put(KafkaConfig.TransactionsTopicReplicationFactorProp, "1")
 properties.put(KafkaConfig.TransactionsTopicMinISRProp, "1")
-properties.put(BrokerSecurityConfigs.PRINCIPAL_BUILDER_CLASS_CONFIG,
-  classOf[PrincipalBuilder].getName)
+properties.put(BrokerSecurityConfigs.PRINCIPAL_BUILDER_CLASS_CONFIG, classOf[PrincipalBuilder].getName)
   }
 
-  override def kraftControllerConfigs(): Seq[Properties] = {
-val controllerConfigs = Seq(new Properties())
-controllerConfigs.foreach { properties =>
-  properties.put(KafkaConfig.AuthorizerClassNameProp, classOf[StandardAuthorizer].getName())
-  properties.put(StandardAuthorizer.SUPER_USERS_CONFIG, BrokerPrincipal.toString())
-  properties.put(BrokerSecurityConfigs.PRINCIPAL_BUILDER_CLASS_CONFIG,
-classOf[PrincipalBuilder].getName)
-}
-controllerConfigs
-  }
 
  val requestKeyToError = (topicNames: Map[Uuid, String], version: Short) => Map[ApiKeys, Nothing => Errors](
ApiKeys.METADATA -> ((resp: requests.MetadataResponse) => resp.errors.asScala.find(_._1 == topic).getOrElse(("test", Errors.NONE))._2),
@@ -2574,14 +2573,6 @@ class AuthorizerIntegrationTest extends BaseRequestTest {
 }
   }
 
-  private def addAndVerifyAcls(acls: Set[AccessControlEntry], resource: ResourcePattern): Unit = {
-TestUtils.addAndVerifyAcls(brokers, acls, resource, controllerServers)
-  }
-
-  private def removeAndVerifyAcls(acls: Set[AccessControlEntry], resource: ResourcePattern): Unit = {
-TestUtils.removeAndVerifyAcls(brokers, acls, resource, controllerServers)
-  }
-
   private def consumeRecords(consumer: Consumer[Array[Byte], Array[Byte]],
  numRecords: Int = 1,
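The PrincipalBuilder change in the first hunk of this commit folds the controller listener into the broker branch of the Scala pattern match, so both listeners resolve to the broker principal. A hypothetical stdlib-only Java rendering of that mapping (listener and principal names are placeholders, not the test's actual values):

```java
public class ListenerPrincipals {
    // Both the broker and controller listeners resolve to the broker
    // principal, matching the merged case in the Scala match above;
    // any other listener is rejected, as in the original default case.
    static String principalFor(String listener) {
        switch (listener) {
            case "BROKER":
            case "CONTROLLER":
                return "broker";
            case "CLIENT":
                return "client";
            default:
                throw new IllegalArgumentException(
                    "No principal mapped to listener " + listener);
        }
    }

    public static void main(String[] args) {
        System.out.println(principalFor("CONTROLLER")); // prints broker
    }
}
```

Collapsing the two cases into one branch keeps the KRaft controller's requests authorized under the same super-user principal as the brokers, which is what lets the shared `addNodeProperties` setup work for both node types.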