This is an automated email from the ASF dual-hosted git repository.

urfree pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/pulsar-site.git


The following commit(s) were added to refs/heads/main by this push:
     new d25f67f8bd9 Docs sync done from apache/pulsar(#27186a1)
d25f67f8bd9 is described below

commit d25f67f8bd98a2031114cd6b8430fd7e189177a1
Author: Pulsar Site Updater <[email protected]>
AuthorDate: Mon Nov 7 12:01:45 2022 +0000

    Docs sync done from apache/pulsar(#27186a1)
---
 site2/website-next/docs/client-libraries-cpp.md      | 18 +-----------------
 site2/website-next/docs/client-libraries-java.md     | 20 ++++----------------
 site2/website-next/docs/concepts-messaging.md        |  6 +++---
 site2/website-next/docs/io-debezium-source.md        |  7 ++-----
 site2/website-next/docs/tiered-storage-overview.md   | 15 ++++++++++-----
 .../version-2.10.x/concepts-messaging.md             | 17 +++++++++--------
 6 files changed, 29 insertions(+), 54 deletions(-)

diff --git a/site2/website-next/docs/client-libraries-cpp.md 
b/site2/website-next/docs/client-libraries-cpp.md
index 7547487d10d..a3ac9422543 100644
--- a/site2/website-next/docs/client-libraries-cpp.md
+++ b/site2/website-next/docs/client-libraries-cpp.md
@@ -9,7 +9,7 @@ import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 ````
 
-You can use a Pulsar C++ client to create producers, consumers, and readers.
+You can use a Pulsar C++ client to create producers, consumers, and readers. 
For complete examples, refer to [C++ client 
examples](https://github.com/apache/pulsar-client-cpp/tree/main/examples).
 
 All the methods in producer, consumer, and reader of a C++ client are 
thread-safe. You can read the [API docs](/api/cpp) for the C++ client.
 
@@ -394,22 +394,6 @@ Consumer consumer;
 client.subscribe("my-topic", "my-sub", conf, consumer);
 ```
 
-## Enable authentication in connection URLs
-If you use TLS authentication when connecting to Pulsar, you need to add `ssl` 
in the connection URLs, and the default port is `6651`. The following is an 
example.
-
-```cpp
-ClientConfiguration config = ClientConfiguration();
-config.setUseTls(true);
-config.setTlsTrustCertsFilePath("/path/to/cacert.pem");
-config.setTlsAllowInsecureConnection(false);
-config.setAuth(pulsar::AuthTls::create(
-            "/path/to/client-cert.pem", "/path/to/client-key.pem"););
-
-Client client("pulsar+ssl://my-broker.com:6651", config);
-```
-
-For complete examples, refer to [C++ client 
examples](https://github.com/apache/pulsar-client-cpp/tree/main/examples).
-
 ## Schema
 
 To work with [Pulsar schema](schema-overview.md) using C++ clients, see 
[Schema - Get started](schema-get-started.md). For specific schema types that 
C++ clients support, see 
[code](https://github.com/apache/pulsar-client-cpp/blob/main/include/pulsar/Schema.h).
\ No newline at end of file
diff --git a/site2/website-next/docs/client-libraries-java.md 
b/site2/website-next/docs/client-libraries-java.md
index 35339c776c2..7127368cf24 100644
--- a/site2/website-next/docs/client-libraries-java.md
+++ b/site2/website-next/docs/client-libraries-java.md
@@ -262,7 +262,6 @@ Mode     | Description
 The following is an example:
 
 ```java
-
 String pulsarBrokerRootUrl = "pulsar://localhost:6650";
 String topic = "persistent://my-tenant/my-namespace/my-topic";
 
@@ -272,7 +271,6 @@ Producer<byte[]> producer = pulsarClient.newProducer()
         .messageRoutingMode(MessageRoutingMode.SinglePartition)
         .create();
 producer.send("Partitioned topic message".getBytes());
-
 ```
 
 #### Custom message router
@@ -280,29 +278,24 @@ producer.send("Partitioned topic message".getBytes());
 To use a custom message router, you need to provide an implementation of the 
{@inject: 
javadoc:MessageRouter:/client/org/apache/pulsar/client/api/MessageRouter} 
interface, which has just one `choosePartition` method:
 
 ```java
-
 public interface MessageRouter extends Serializable {
     int choosePartition(Message msg);
 }
-
 ```
 
 The following router routes every message to partition 10:
 
 ```java
-
 public class AlwaysTenRouter implements MessageRouter {
     public int choosePartition(Message msg) {
         return 10;
     }
 }
-
 ```
 
 With that implementation, you can send messages to partitioned topics as below.
 
 ```java
-
 String pulsarBrokerRootUrl = "pulsar://localhost:6650";
 String topic = "persistent://my-tenant/my-cluster-my-namespace/my-topic";
 
@@ -312,14 +305,12 @@ Producer<byte[]> producer = pulsarClient.newProducer()
         .messageRouter(new AlwaysTenRouter())
         .create();
 producer.send("Partitioned topic message".getBytes());
-
 ```
 
 #### How to choose partitions when using a key
 If a message has a key, it supersedes the round robin routing policy. The 
following example illustrates how to choose the partition when using a key.
 
 ```java
-
 // If the message has a key, it supersedes the round robin routing policy
         if (msg.hasKey()) {
             return signSafeMod(hash.makeHash(msg.getKey()), 
topicMetadata.numPartitions());
@@ -331,7 +322,6 @@ If a message has a key, it supersedes the round robin 
routing policy. The follow
         } else {
             return signSafeMod(PARTITION_INDEX_UPDATER.getAndIncrement(this), 
topicMetadata.numPartitions());
         }
-
 ```
 
 ### Async send
@@ -387,17 +377,16 @@ To enable chunking, you need to disable batching 
(`enableBatching`=`false`) conc
 
 ### Intercept messages
 
-`ProducerInterceptor`s intercept and possibly mutate messages received by the 
producer before they are published to the brokers.
+`ProducerInterceptor` intercepts and possibly mutates messages received by the 
producer before they are published to the brokers.
 
 The interface has three main events:
 * `eligible` checks if the interceptor can be applied to the message.
 * `beforeSend` is triggered before the producer sends the message to the 
broker. You can modify messages within this event.
 * `onSendAcknowledgement` is triggered when the message is acknowledged by the 
broker or the sending failed.
 
-To intercept messages, you can add one or multiple `ProducerInterceptor`s when 
creating a `Producer` as follows.
+To intercept messages, you can add one or more `ProducerInterceptor`s when creating a `Producer` as follows.
 
 ```java
-
 Producer<byte[]> producer = client.newProducer()
         .topic(topic)
         .intercept(new ProducerInterceptor {
@@ -417,12 +406,11 @@ Producer<byte[]> producer = client.newProducer()
                        }
         })
         .create();
-
 ```
 
 :::note
 
-If you are using multiple interceptors, they apply in the order they are 
passed to the `intercept` method.
+Multiple interceptors apply in the order they are passed to the `intercept` 
method.
 
 :::
 
@@ -1171,7 +1159,7 @@ The producer above is equivalent to a `Producer<byte[]>` 
(in fact, you should *a
 
 ## Authentication
 
-Pulsar currently supports the following authentication mechansims:
+Pulsar Java clients currently support the following authentication mechanisms:
 * 
[TLS](security-tls-authentication.md#configure-tls-authentication-in-pulsar-clients)
 * [JWT](security-jwt.md#configure-jwt-authentication-in-pulsar-clients)
 * 
[Athenz](security-athenz.md#configure-athenz-authentication-in-pulsar-clients)
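The `choosePartition` hunk above maps a message key's hash to a partition with a sign-safe modulo, so that negative hash values still land in a valid partition index. A minimal self-contained sketch of that arithmetic (the method body is illustrative, not the Java client's actual implementation):

```java
public class RoutingSketch {
    // Map a possibly-negative hash to a partition index in [0, numPartitions).
    // Java's % operator can return a negative result, so correct for that.
    static int signSafeMod(long hash, int numPartitions) {
        int mod = (int) (hash % numPartitions);
        return mod < 0 ? mod + numPartitions : mod;
    }

    public static void main(String[] args) {
        System.out.println(signSafeMod(-7, 10));                   // 3
        System.out.println(signSafeMod("my-key".hashCode(), 10));  // always in [0, 10)
    }
}
```

Without the sign correction, a key whose `hashCode()` is negative would produce a negative partition index and fail; this is why the routing code cannot use a plain `%`.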
diff --git a/site2/website-next/docs/concepts-messaging.md 
b/site2/website-next/docs/concepts-messaging.md
index a115b96ce0e..4b89b4251e8 100644
--- a/site2/website-next/docs/concepts-messaging.md
+++ b/site2/website-next/docs/concepts-messaging.md
@@ -357,7 +357,7 @@ consumer.acknowledge(message);
 
 ### Retry letter topic
 
-The retry letter topic allows you to store the messages that failed to be 
consumed and retry consuming them later. With this method, you can customize 
the interval at which the messages are redelivered. Consumers on the original 
topic are automatically subscribed to the retry letter topic as well. Once the 
maximum number of retries has been reached, the unconsumed messages are moved 
to a [dead letter topic](#dead-letter-topic) for manual processing.
+The retry letter topic allows you to store the messages that failed to be consumed and retry consuming them later. With this method, you can customize the interval at which the messages are redelivered. Consumers on the original topic are automatically subscribed to the retry letter topic as well. Once the maximum number of retries has been reached, the unconsumed messages are moved to a [dead letter topic](#dead-letter-topic) for manual processing. The functionality of a retry letter topic  [...]
 
 The diagram below illustrates the concept of the retry letter topic.
 ![](/assets/retry-letter-topic.svg)
@@ -443,7 +443,7 @@ consumer.reconsumeLater(msg, customProperties, 3, 
TimeUnit.SECONDS);
 
 ### Dead letter topic
 
-Dead letter topic allows you to continue message consumption even when some 
messages are not consumed successfully. The messages that have failed to be 
consumed are stored in a specific topic, which is called the dead letter topic. 
You can decide how to handle the messages in the dead letter topic.
+The dead letter topic allows you to continue message consumption even when some messages are not consumed successfully. The messages that have failed to be consumed are stored in a specific topic, which is called the dead letter topic. The functionality of a dead letter topic is implemented by consumers. You can decide how to handle the messages in the dead letter topic.
 
 Enable dead letter topic in a Java client using the default dead letter topic.
 
@@ -570,7 +570,7 @@ In the *Failover* type, multiple consumers can attach to 
the same subscription.
 
 For example, a partitioned topic has 3 partitions, and 15 consumers. Each 
partition will have 1 active consumer and 4 stand-by consumers.
 
-In the diagram below, **Consumer A** is the master consumer while **Consumer 
B** would be the next consumer in line to receive messages if **Consumer B** is 
disconnected.
+In the diagram below, **Consumer A** is the master consumer while **Consumer 
B** would be the next consumer in line to receive messages if **Consumer A** is 
disconnected.
 
 ![Failover subscriptions](/assets/pulsar-failover-subscriptions.svg)
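The retry letter and dead letter hunks above stress that both features are implemented on the consumer side: after a failed consumption, the client re-publishes the message either to the retry topic or, once the retry budget is exhausted, to the dead letter topic. A hedged sketch of that routing decision (the suffixes follow Pulsar's documented default topic names, but the helper itself is hypothetical, not client code):

```java
public class RedeliverySketch {
    // Hypothetical helper mirroring the retry-then-dead-letter flow:
    // retry while redeliveries remain, otherwise park the message for
    // manual handling on the dead letter topic.
    static String nextTopic(String topic, String sub,
                            int redeliveryCount, int maxRedeliverCount) {
        return redeliveryCount < maxRedeliverCount
                ? topic + "-" + sub + "-RETRY"  // redeliver later
                : topic + "-" + sub + "-DLQ";   // out of retries
    }

    public static void main(String[] args) {
        System.out.println(nextTopic("my-topic", "my-sub", 1, 3)); // my-topic-my-sub-RETRY
        System.out.println(nextTopic("my-topic", "my-sub", 3, 3)); // my-topic-my-sub-DLQ
    }
}
```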
 
diff --git a/site2/website-next/docs/io-debezium-source.md 
b/site2/website-next/docs/io-debezium-source.md
index c4ce2c2f63e..a68dbcb53ab 100644
--- a/site2/website-next/docs/io-debezium-source.md
+++ b/site2/website-next/docs/io-debezium-source.md
@@ -26,9 +26,9 @@ The configuration of the Debezium source connector has the 
following properties.
 | `database.history.pulsar.topic` | true | null | The name of the database 
history topic where the connector writes and recovers DDL statements. <br /><br 
/>**Note: this topic is for internal use only and should not be used by 
consumers.** |
 | `database.history.pulsar.service.url` | true | null | Pulsar cluster service 
URL for history topic. |
 | `offset.storage.topic` | true | null | Record the last committed offsets 
that the connector successfully completes. |
-| `json-with-envelope` | false | false | Present the message only consist of 
payload. |
+| `json-with-envelope` | false | false | Whether to present the message with its envelope. If disabled, the message consists of the payload only. |
 | `database.history.pulsar.reader.config` | false | null | The configs of the 
reader for the database schema history topic, in the form of a JSON string with 
key-value pairs. |
-| `offset.storage.reader.config` | false | null | The configs of the reader 
for the kafka connector offsets topic, in the form of a JSON string with 
key-value pairs. | 
+| `offset.storage.reader.config` | false | null | The configs of the reader 
for the kafka connector offsets topic, in the form of a JSON string with 
key-value pairs. |
 
 ### Converter Options
 
@@ -57,7 +57,6 @@ Schema.AUTO_CONSUME(), KeyValueEncodingType.SEPARATED)`, and 
the message consist
 The Debezium Connector exposes `database.history.pulsar.reader.config` and 
`offset.storage.reader.config` to configure the reader of database schema 
history topic and the Kafka connector offsets topic. For example, it can be 
used to configure the subscription name and other reader configurations. You 
can find the available configurations at 
[ReaderConfigurationData](https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/conf/ReaderConfigura
 [...]
 
 For example, to configure the subscription name for both Readers, you can add 
the following configuration:
-
 * JSON
 
    ```json
@@ -72,11 +71,9 @@ For example, to configure the subscription name for both 
Readers, you can add th
 * YAML
 
    ```yaml
-   
    configs:
       database.history.pulsar.reader.config: 
"{\"subscriptionName\":\"history-reader\"}"
       offset.storage.reader.config: "{\"subscriptionName\":\"offset-reader\"}"
-
    ```
 
 ## Example of MySQL
diff --git a/site2/website-next/docs/tiered-storage-overview.md 
b/site2/website-next/docs/tiered-storage-overview.md
index a317f093130..d0afd3689d3 100644
--- a/site2/website-next/docs/tiered-storage-overview.md
+++ b/site2/website-next/docs/tiered-storage-overview.md
@@ -6,7 +6,7 @@ sidebar_label: "Overview"
 
 Pulsar's **Tiered Storage** feature allows older backlog data to be moved from 
BookKeeper to long-term and cheaper storage, while still allowing clients to 
access the backlog as if nothing has changed.
 
-* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support 
[Amazon S3](https://aws.amazon.com/s3/), [GCS (Google Cloud 
Storage)](https://cloud.google.com/storage/), 
[Azure](https://azure.microsoft.com/en-us/services/storage/blobs/) and [Aliyun 
OSS](https://www.aliyun.com/product/oss) for long term storage.
+* Tiered storage uses [Apache jclouds](https://jclouds.apache.org) to support 
[Amazon S3](https://aws.amazon.com/s3/), [GCS (Google Cloud 
Storage)](https://cloud.google.com/storage/), 
[Azure](https://azure.microsoft.com/en-us/services/storage/blobs/) and [Aliyun 
OSS](https://www.aliyun.com/product/oss) for long-term storage.
   * Read how to [Use AWS S3 offloader with Pulsar](tiered-storage-aws.md);
   * Read how to [Use GCS offloader with Pulsar](tiered-storage-gcs.md);
   * Read how to [Use Azure BlobStore offloader with 
Pulsar](tiered-storage-azure.md);
@@ -15,6 +15,13 @@ Pulsar's **Tiered Storage** feature allows older backlog 
data to be moved from B
 * Tiered storage uses [Apache Hadoop](http://hadoop.apache.org/) to support 
filesystems for long-term storage.
   * Read how to [Use filesystem offloader with 
Pulsar](tiered-storage-filesystem.md).
 
+:::tip
+
+The [AWS S3 offloader](tiered-storage-aws.md) registers specific AWS metadata, such as regions and service URLs, and requests the bucket location before performing any operations. If you cannot access the Amazon service, you can use the [S3 offloader](tiered-storage-s3.md) instead, since it exposes an S3-compatible API without the metadata.
+
+:::
+
+
 ## When to use tiered storage?
 
 Tiered storage should be used when you have a topic for which you want to keep 
a very long backlog for a long time.
@@ -57,12 +64,10 @@ A topic in Pulsar is backed by a **log**, known as a 
**managed ledger**. This lo
 
 The tiered storage offloading mechanism takes advantage of the 
segment-oriented architecture. When offloading is requested, the segments of 
the log are copied one by one to tiered storage. All segments of the log (apart 
from the current segment) written to tiered storage can be offloaded.
 
-Data written to BookKeeper is replicated to 3 physical machines by default. 
However, once a segment is sealed in BookKeeper, it becomes immutable and can 
be copied to long-term storage. Long-term storage can achieve cost savings by 
using mechanisms such as [Reed-Solomon error 
correction](https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction)
 to require fewer physical copies of data.
+Data written to BookKeeper is replicated to 3 physical machines by default. 
However, once a segment is sealed in BookKeeper, it becomes immutable and can 
be copied to long-term storage. Long-term storage has the potential to achieve 
significant cost savings.
 
 Before offloading ledgers to long-term storage, you need to configure buckets, 
credentials, and other properties for the cloud storage service. Additionally, 
Pulsar uses multi-part objects to upload the segment data and brokers may crash 
while uploading the data. It is recommended that you add a life cycle rule for 
your bucket to expire incomplete multi-part upload after a day or two days to 
avoid getting charged for incomplete uploads. Moreover, you can trigger the 
offloading operation  [...]
 
 After offloading ledgers to long-term storage, you can still query data in the 
offloaded ledgers with Pulsar SQL.
 
-For more information about tiered storage for Pulsar topics, see 
[here](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics).
-
-For more information about offload metrics, see 
[here](reference-metrics.md#offload-metrics).
+For more information about tiered storage for Pulsar topics, see 
[PIP-17](https://github.com/apache/pulsar/wiki/PIP-17:-Tiered-storage-for-Pulsar-topics)
 and [offload metrics](reference-metrics.md#offload-metrics).
\ No newline at end of file
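The offloading description in the hunk above says that all sealed segments of a managed ledger can move to long-term storage, while the currently open segment cannot. That rule can be sketched as a simple filter (`offloadable` and the segment list are illustrative, not broker internals):

```java
import java.util.List;

public class OffloadSketch {
    // Every segment except the last (still-open) one is eligible for offload.
    static List<String> offloadable(List<String> segments) {
        if (segments.isEmpty()) return segments;
        return segments.subList(0, segments.size() - 1);
    }

    public static void main(String[] args) {
        List<String> ledger = List.of("seg-0", "seg-1", "seg-2");
        System.out.println(offloadable(ledger)); // [seg-0, seg-1]
    }
}
```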
diff --git 
a/site2/website-next/versioned_docs/version-2.10.x/concepts-messaging.md 
b/site2/website-next/versioned_docs/version-2.10.x/concepts-messaging.md
index 84e31b8f689..adb7687df96 100644
--- a/site2/website-next/versioned_docs/version-2.10.x/concepts-messaging.md
+++ b/site2/website-next/versioned_docs/version-2.10.x/concepts-messaging.md
@@ -263,14 +263,15 @@ The message redelivery behavior should be as follows.
 
 Redelivery count | Redelivery delay
 :--------------------|:-----------
-1 | 10 + 1 seconds
-2 | 10 + 2 seconds
-3 | 10 + 4 seconds
-4 | 10 + 8 seconds
-5 | 10 + 16 seconds
-6 | 10 + 32 seconds
-7 | 10 + 60 seconds
-8 | 10 + 60 seconds
+1 | 1 seconds
+2 | 2 seconds
+3 | 4 seconds
+4 | 8 seconds
+5 | 16 seconds
+6 | 32 seconds
+7 | 60 seconds
+8 | 60 seconds
+
 :::note
 
 If batching is enabled, all messages in one batch are redelivered to the 
consumer.
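The corrected table in the hunk above follows a doubling backoff starting from a 1-second minimum delay and capped at 60 seconds. A self-contained sketch reproducing those values (the formula is inferred from the table, not taken from the client source):

```java
public class BackoffSketch {
    // Delay doubles per redelivery: 1, 2, 4, 8, ... seconds, capped at 60.
    static long delaySeconds(int redeliveryCount) {
        long exponential = 1L << (redeliveryCount - 1);
        return Math.min(exponential, 60L);
    }

    public static void main(String[] args) {
        for (int count = 1; count <= 8; count++) {
            System.out.println(count + " | " + delaySeconds(count) + " seconds");
        }
    }
}
```

Running the loop prints the same eight rows as the table: counts 1 through 6 yield 1, 2, 4, 8, 16, and 32 seconds, and counts 7 and 8 both hit the 60-second cap.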
