This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new 3e4e7d51888 Docs sync done from apache/pulsar(#ec74618)
3e4e7d51888 is described below

commit 3e4e7d51888e9ea57aafec4455a5a2e9e6a1fa80
Author: Pulsar Site Updater <[email protected]>
AuthorDate: Tue Oct 11 12:01:43 2022 +0000

    Docs sync done from apache/pulsar(#ec74618)
---
 site2/website-next/docs/io-kafka-sink.md                  |  9 ++++++++-
 site2/website-next/docs/io-kafka-source.md                | 12 +++++++++---
 site2/website-next/static/swagger/restApiVersions.json    | 10 +++++-----
 .../versioned_docs/version-2.10.x/io-kafka-sink.md        | 15 ++++++++++-----
 .../versioned_docs/version-2.10.x/io-kafka-source.md      | 12 +++++++++---
 5 files changed, 41 insertions(+), 17 deletions(-)

diff --git a/site2/website-next/docs/io-kafka-sink.md b/site2/website-next/docs/io-kafka-sink.md
index a8596c08588..3506ba47319 100644
--- a/site2/website-next/docs/io-kafka-sink.md
+++ b/site2/website-next/docs/io-kafka-sink.md
@@ -17,13 +17,20 @@ The configuration of the Kafka sink connector has the following parameters.
 | Name | Type| Required | Default | Description 
 |------|----------|---------|-------------|-------------|
 |  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|  `securityProtocol` |String| false | " " (empty string) | The protocol used to communicate with Kafka brokers. |
+|  `saslMechanism` |String| false | " " (empty string) | The SASL mechanism used for Kafka client connections. |
+|  `saslJaasConfig` |String| false | " " (empty string) | The JAAS login context parameters for SASL connections in the format used by JAAS configuration files. |
+|  `sslEnabledProtocols` |String| false | " " (empty string) | The list of protocols enabled for SSL connections. |
+|  `sslEndpointIdentificationAlgorithm` |String| false | " " (empty string) | The endpoint identification algorithm to validate server hostnames using a server certificate. |
+|  `sslTruststoreLocation` |String| false | " " (empty string) | The location of the trust store file. |
+|  `sslTruststorePassword` |String| false | " " (empty string) | The password of the trust store file. |
 |`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br />This controls the durability of the sent records.
 |`batchsize`|long|false|16384L|The batch size that a Kafka producer attempts to batch records together before sending them to brokers.
 |`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
 |`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
 | `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
 | `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
-|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**.
 
 
 ### Example
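For context on the parameters added above: a minimal sketch of a sink connector config that sets them, assuming a broker with a SASL_SSL listener and the PLAIN mechanism. The address, credentials, and truststore path below are placeholders, not values from this commit.

```yaml
# Hypothetical sink config exercising the new security parameters;
# every value here is a placeholder.
configs:
  bootstrapServers: "localhost:9093"
  topic: "test"
  acks: "1"
  securityProtocol: "SASL_SSL"
  saslMechanism: "PLAIN"
  # Standard Kafka JAAS login-module string for the PLAIN mechanism.
  saslJaasConfig: 'org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="alice-secret";'
  sslTruststoreLocation: "/etc/kafka/truststore.jks"
  sslTruststorePassword: "changeit"
```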
diff --git a/site2/website-next/docs/io-kafka-source.md b/site2/website-next/docs/io-kafka-source.md
index d3fba5697ae..44f5016df91 100644
--- a/site2/website-next/docs/io-kafka-source.md
+++ b/site2/website-next/docs/io-kafka-source.md
@@ -17,6 +17,13 @@ The configuration of the Kafka source connector has the following properties.
 | Name | Type| Required | Default | Description 
 |------|----------|---------|-------------|-------------|
 |  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|  `securityProtocol` |String| false | " " (empty string) | The protocol used to communicate with Kafka brokers. |
+|  `saslMechanism` |String| false | " " (empty string) | The SASL mechanism used for Kafka client connections. |
+|  `saslJaasConfig` |String| false | " " (empty string) | The JAAS login context parameters for SASL connections in the format used by JAAS configuration files. |
+|  `sslEnabledProtocols` |String| false | " " (empty string) | The list of protocols enabled for SSL connections. |
+|  `sslEndpointIdentificationAlgorithm` |String| false | " " (empty string) | The endpoint identification algorithm to validate server hostnames using a server certificate. |
+|  `sslTruststoreLocation` |String| false | " " (empty string) | The location of the trust store file. |
+|  `sslTruststorePassword` |String| false | " " (empty string) | The password of the trust store file. |
 | `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
 | `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. |
 | `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br /> This committed offset is used when the process fails as the position from which a new consumer begins. |
@@ -100,9 +107,8 @@ This example describes how to use the Kafka source connector to feed data from K
 
 #### Steps
 
-1. Download and start the Confluent Platform.
-
-For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally.
+1. Download and start the Confluent Platform. 
+   For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally.
 
 2. Pull a Pulsar image and start Pulsar in standalone mode.
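And the source-side counterpart, sketched for a broker exposing a TLS-only listener; again, the hostname, paths, and group id are placeholders rather than values from this commit.

```yaml
# Hypothetical source config; placeholder values throughout.
configs:
  bootstrapServers: "kafka.example.com:9093"
  groupId: "pulsar-io-kafka-group"
  topic: "test"
  securityProtocol: "SSL"
  sslEnabledProtocols: "TLSv1.2,TLSv1.3"
  # HTTPS-style hostname verification against the server certificate.
  sslEndpointIdentificationAlgorithm: "https"
  sslTruststoreLocation: "/etc/kafka/truststore.jks"
  sslTruststorePassword: "changeit"
```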
 
diff --git a/site2/website-next/static/swagger/restApiVersions.json b/site2/website-next/static/swagger/restApiVersions.json
index f036dbe6692..29099ec9b0e 100644
--- a/site2/website-next/static/swagger/restApiVersions.json
+++ b/site2/website-next/static/swagger/restApiVersions.json
@@ -33,7 +33,7 @@
             ]
         }
     ],
-    "2.3.1": [
+    "2.3.0": [
         {
             "version": "v2",
             "fileName": [
@@ -49,7 +49,7 @@
             ]
         }
     ],
-    "2.3.2": [
+    "2.3.1": [
         {
             "version": "v2",
             "fileName": [
@@ -65,7 +65,7 @@
             ]
         }
     ],
-    "2.3.0": [
+    "2.3.2": [
         {
             "version": "v2",
             "fileName": [
@@ -129,7 +129,7 @@
             ]
         }
     ],
-    "2.5.1": [
+    "2.5.0": [
         {
             "version": "v2",
             "fileName": [
@@ -145,7 +145,7 @@
             ]
         }
     ],
-    "2.5.0": [
+    "2.5.1": [
         {
             "version": "v2",
             "fileName": [
diff --git a/site2/website-next/versioned_docs/version-2.10.x/io-kafka-sink.md b/site2/website-next/versioned_docs/version-2.10.x/io-kafka-sink.md
index ce8967e0461..80756eb4e32 100644
--- a/site2/website-next/versioned_docs/version-2.10.x/io-kafka-sink.md
+++ b/site2/website-next/versioned_docs/version-2.10.x/io-kafka-sink.md
@@ -19,13 +19,20 @@ The configuration of the Kafka sink connector has the following parameters.
 | Name | Type| Required | Default | Description 
 |------|----------|---------|-------------|-------------|
 |  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|  `securityProtocol` |String| false | " " (empty string) | The protocol used to communicate with Kafka brokers. |
+|  `saslMechanism` |String| false | " " (empty string) | The SASL mechanism used for Kafka client connections. |
+|  `saslJaasConfig` |String| false | " " (empty string) | The JAAS login context parameters for SASL connections in the format used by JAAS configuration files. |
+|  `sslEnabledProtocols` |String| false | " " (empty string) | The list of protocols enabled for SSL connections. |
+|  `sslEndpointIdentificationAlgorithm` |String| false | " " (empty string) | The endpoint identification algorithm to validate server hostnames using a server certificate. |
+|  `sslTruststoreLocation` |String| false | " " (empty string) | The location of the trust store file. |
+|  `sslTruststorePassword` |String| false | " " (empty string) | The password of the trust store file. |
 |`acks`|String|true|" " (empty string) |The number of acknowledgments that the producer requires the leader to receive before a request completes. <br />This controls the durability of the sent records.
 |`batchsize`|long|false|16384L|The batch size that a Kafka producer attempts to batch records together before sending them to brokers.
 |`maxRequestSize`|long|false|1048576L|The maximum size of a Kafka request in bytes.
 |`topic`|String|true|" " (empty string) |The Kafka topic which receives messages from Pulsar.
 | `keyDeserializationClass` | String|false | org.apache.kafka.common.serialization.StringSerializer | The serializer class for Kafka producers to serialize keys.
 | `valueDeserializationClass` | String|false | org.apache.kafka.common.serialization.ByteArraySerializer | The serializer class for Kafka producers to serialize values.<br /><br />The serializer is set by a specific implementation of [`KafkaAbstractSink`](https://github.com/apache/pulsar/blob/master/pulsar-io/kafka/src/main/java/org/apache/pulsar/io/kafka/KafkaAbstractSink.java).
-|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note:  other properties specified in the connector configuration file take precedence over this configuration**.
+|`producerConfigProperties`|Map|false|" " (empty string)|The producer configuration properties to be passed to producers. <br /><br />**Note: other properties specified in the connector configuration file take precedence over this configuration**.
 
 
 ### Example
@@ -35,7 +42,6 @@ Before using the Kafka sink connector, you need to create a configuration file t
 * JSON 
 
   ```json
-  
   {
      "configs": {
         "bootstrapServers": "localhost:6667",
@@ -52,12 +58,11 @@ Before using the Kafka sink connector, you need to create a configuration file t
         }
      }
   }
+  ```
 
 * YAML
   
-  ```
-
-yaml
+  ```yaml
   configs:
       bootstrapServers: "localhost:6667"
       topic: "test"
diff --git a/site2/website-next/versioned_docs/version-2.10.x/io-kafka-source.md b/site2/website-next/versioned_docs/version-2.10.x/io-kafka-source.md
index dd6000aa0bd..35db8537968 100644
--- a/site2/website-next/versioned_docs/version-2.10.x/io-kafka-source.md
+++ b/site2/website-next/versioned_docs/version-2.10.x/io-kafka-source.md
@@ -19,6 +19,13 @@ The configuration of the Kafka source connector has the following properties.
 | Name | Type| Required | Default | Description 
 |------|----------|---------|-------------|-------------|
 |  `bootstrapServers` |String| true | " " (empty string) | A comma-separated list of host and port pairs for establishing the initial connection to the Kafka cluster. |
+|  `securityProtocol` |String| false | " " (empty string) | The protocol used to communicate with Kafka brokers. |
+|  `saslMechanism` |String| false | " " (empty string) | The SASL mechanism used for Kafka client connections. |
+|  `saslJaasConfig` |String| false | " " (empty string) | The JAAS login context parameters for SASL connections in the format used by JAAS configuration files. |
+|  `sslEnabledProtocols` |String| false | " " (empty string) | The list of protocols enabled for SSL connections. |
+|  `sslEndpointIdentificationAlgorithm` |String| false | " " (empty string) | The endpoint identification algorithm to validate server hostnames using a server certificate. |
+|  `sslTruststoreLocation` |String| false | " " (empty string) | The location of the trust store file. |
+|  `sslTruststorePassword` |String| false | " " (empty string) | The password of the trust store file. |
 | `groupId` |String| true | " " (empty string) | A unique string that identifies the group of consumer processes to which this consumer belongs. |
 | `fetchMinBytes` | long|false | 1 | The minimum byte expected for each fetch response. |
 | `autoCommitEnabled` | boolean |false | true | If set to true, the consumer's offset is periodically committed in the background.<br /><br /> This committed offset is used when the process fails as the position from which a new consumer begins. |
@@ -56,7 +63,7 @@ In this case, you can have a Pulsar topic with the following properties:
 - KeySchema: the schema detected from `keyDeserializationClass`
 - ValueSchema: the schema detected from `valueDeserializationClass`
 
-Topic compaction and partition routing use the Pulsar key, that contains the Kafka key, and so they are driven by the same value that you have on Kafka.
+Topic compaction and partition routing use the Pulsar key, which contains the Kafka key, and so they are driven by the same value that you have on Kafka.
 
 When you consume data from Pulsar topics, you can use the `KeyValue` schema. In this way, you can decode the data properly.
 If you want to access the raw key, you can use the `Message#getKeyBytes()` API.
@@ -107,8 +114,7 @@ This example describes how to use the Kafka source connector to feed data from K
 #### Steps
 
 1. Download and start the Confluent Platform.
-
-For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally.
+   For details, see the [documentation](https://docs.confluent.io/platform/current/quickstart/ce-docker-quickstart.html#step-1-download-and-start-cp) to install the Kafka service locally.
 
 2. Pull a Pulsar image and start Pulsar in standalone mode.
 
