This is an automated email from the ASF dual-hosted git repository.
junma pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git
The following commit(s) were added to refs/heads/master by this push:
new 284001a7b7b [fix][doc] Sync recent changes on versioned docs (#17130)
284001a7b7b is described below
commit 284001a7b7bc9212d79bc7097b5b9c61658a8ca7
Author: momo-jun <[email protected]>
AuthorDate: Wed Sep 7 09:05:10 2022 +0800
[fix][doc] Sync recent changes on versioned docs (#17130)
* Sync recent changes from #17030, #17039, #16315, and #17057
* fix #17119
* minor updates
* add link of release notes to navigation
* fix
* update release process as per PIP-190
* minor fix
* minor fix
* Update release-process.md
---
site2/docs/client-libraries-java.md | 2 +-
site2/docs/cookbooks-compaction.md | 28 +++++++-------
site2/docs/functions-overview.md | 2 +-
site2/docs/reference-metrics.md | 4 +-
site2/website/sidebars.json | 7 +++-
.../version-2.10.x/client-libraries-java.md | 6 ++-
.../version-2.10.x/cookbooks-compaction.md | 32 +++++++++-------
.../version-2.10.x/io-debezium-source.md | 4 +-
.../version-2.10.x/reference-configuration.md | 2 +-
.../security-policy-and-supported-versions.md | 10 +++--
.../version-2.10.x/tiered-storage-filesystem.md | 8 ++--
.../version-2.8.x/client-libraries-java.md | 2 +-
.../version-2.8.x/cookbooks-compaction.md | 32 +++++++++-------
.../version-2.8.x/developing-binary-protocol.md | 3 +-
.../version-2.8.x/reference-configuration.md | 2 +-
.../version-2.8.x/tiered-storage-filesystem.md | 8 ++--
.../version-2.9.x/cookbooks-compaction.md | 32 +++++++++-------
.../version-2.9.x/io-debezium-source.md | 4 +-
.../version-2.9.x/reference-configuration.md | 2 +-
.../version-2.9.x/tiered-storage-filesystem.md | 6 +--
wiki/release/release-process.md | 44 ++++++++--------------
21 files changed, 129 insertions(+), 111 deletions(-)
diff --git a/site2/docs/client-libraries-java.md b/site2/docs/client-libraries-java.md
index 30f88956079..82568086b8d 100644
--- a/site2/docs/client-libraries-java.md
+++ b/site2/docs/client-libraries-java.md
@@ -1431,7 +1431,7 @@ With TableView, Pulsar clients can fetch all the message
updates from a topic an
:::note
-Each TableView uses one Reader instance per partition, and reads the topic
starting from the compacted view by default. It is highly recommended to enable
automatic compaction by [configuring the topic compaction
policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically)
for the given topic or namespace. More frequent compaction results in shorter
startup times because less data is replayed to reconstruct the TableView of the
topic.
+Each TableView uses one Reader instance per partition, and reads the topic
starting from the compacted view by default. It is highly recommended to enable
automatic compaction by [configuring the topic compaction
policies](cookbooks-compaction.md#configure-compaction-to-run-automatically)
for the given topic or namespace. More frequent compaction results in shorter
startup times because less data is replayed to reconstruct the TableView of the
topic.
:::
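For context on the note above, here is a minimal Java sketch of building a TableView over a compacted topic. It assumes a local broker at `pulsar://localhost:6650`, a `String`-schema topic, and the 2.10-era builder entry point `newTableViewBuilder`; all names are placeholders rather than part of the original change.

```java
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.client.api.TableView;

public class TableViewSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical service URL and topic; adjust to your deployment.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // The TableView replays the topic from its compacted view, so more
        // frequent compaction means less data to replay at startup.
        TableView<String> tableView = client.newTableViewBuilder(Schema.STRING)
                .topic("persistent://my-tenant/my-namespace/stock-values")
                .create();

        // Read the latest value for a key, then react to future updates.
        System.out.println("AAPL -> " + tableView.get("AAPL"));
        tableView.forEachAndListen((key, value) ->
                System.out.println(key + " updated to " + value));
    }
}
```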
diff --git a/site2/docs/cookbooks-compaction.md b/site2/docs/cookbooks-compaction.md
index a20ebd188ad..95403af461d 100644
--- a/site2/docs/cookbooks-compaction.md
+++ b/site2/docs/cookbooks-compaction.md
@@ -9,8 +9,8 @@ Pulsar's [topic
compaction](concepts-topic-compaction.md#compaction) feature ena
To use compaction:
* You need to give messages keys, as topic compaction in Pulsar takes place on
a *per-key basis* (i.e. messages are compacted based on their key). For a stock
ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the
key (more on this [below](#when-should-i-use-compacted-topics)). Messages
without keys will be left alone by the compaction process.
-* Compaction can be configured to run
[automatically](#configuring-compaction-to-run-automatically), or you can
manually [trigger](#triggering-compaction-manually) compaction using the Pulsar
administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from
compacted topics ([Java consumers](#java), for example, have a `readCompacted`
setting that must be set to `true`). If this configuration is not set,
consumers will still be able to read from the non-compacted topic.
+* Compaction can be configured to run
[automatically](#configure-compaction-to-run-automatically), or you can
manually [trigger](#trigger-compaction-manually) compaction using the Pulsar
administrative API.
+* Your consumers must be [configured](#configure-consumers) to read from
compacted topics (Java consumers, for example, have a `readCompacted` setting
that must be set to `true`). If this configuration is not set, consumers will
still be able to read from the non-compacted topic.
> Compaction only works on messages that have keys (as in the stock ticker
> example the stock symbol serves as the key for each message). Keys can thus
> be thought of as the axis along which compaction is applied. Messages that
> don't have keys are simply ignored by compaction.
@@ -22,25 +22,25 @@ The classic example of a topic that could benefit from
compaction would be a sto
* They can read from the "original," non-compacted topic in case they need
access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most
up-to-date messages.
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#consumer-con [...]
+Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#configure-co [...]
> One of the benefits of compaction in Pulsar is that you aren't forced to
> choose between compacted and non-compacted topics, as the compaction process
> leaves the original topic as-is and essentially adds an alternate topic. In
> other words, you can run compaction on a topic and consumers that need
> access to the non-compacted version of the topic will not be adversely
> affected.
-## Configuring compaction to run automatically
+## Configure compaction to run automatically
-Tenant administrators can configure a policy for compaction at the namespace
level. The policy specifies how large the topic backlog can grow before
compaction is triggered.
+Compaction policy specifies how large the topic backlog can grow before
compaction is triggered.
-For example, to trigger compaction when the backlog reaches 100MB:
+Tenant administrators can configure a compaction policy at namespace or topic
levels. Configuring the compaction policy at the namespace level applies to all
topics within that namespace.
+
+For example, to trigger compaction in a namespace when the backlog reaches
100MB:
```bash
bin/pulsar-admin namespaces set-compaction-threshold \
--threshold 100M my-tenant/my-namespace
```
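The same namespace-level threshold can also be set programmatically. The following sketch uses the Java admin client, assuming an admin endpoint at `http://localhost:8080` and the `Namespaces.setCompactionThreshold` method; treat the URL and namespace as placeholders.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class SetNamespaceCompactionThreshold {
    public static void main(String[] args) throws Exception {
        // Hypothetical admin service URL; adjust to your deployment.
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build();

        // Trigger automatic compaction once the namespace backlog exceeds ~100 MB.
        admin.namespaces().setCompactionThreshold(
                "my-tenant/my-namespace", 100L * 1024 * 1024);

        admin.close();
    }
}
```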
-Configuring the compaction threshold on a namespace will apply to all topics
within that namespace.
-
-## Triggering compaction manually
+## Trigger compaction manually
To run compaction on a topic, you need to use the [`topics
compact`](/tools/pulsar-admin/) command for the
[`pulsar-admin`](/tools/pulsar-admin/) CLI tool. Here's an example:
@@ -70,15 +70,15 @@ bin/pulsar compact-topic \
--topic persistent://my-tenant/my-namespace/my-topic
```
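As a rough equivalent to the CLI commands above, compaction can also be triggered and monitored through the Java admin client. The sketch below assumes the `Topics.triggerCompaction` and `Topics.compactionStatus` methods of recent admin APIs and a hypothetical admin URL.

```java
import org.apache.pulsar.client.admin.LongRunningProcessStatus;
import org.apache.pulsar.client.admin.PulsarAdmin;

public class TriggerCompaction {
    public static void main(String[] args) throws Exception {
        String topic = "persistent://my-tenant/my-namespace/my-topic";
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // hypothetical admin URL
                .build();

        // Ask the owning broker to start compacting the topic.
        admin.topics().triggerCompaction(topic);

        // Poll the long-running compaction status until it stops running.
        LongRunningProcessStatus status = admin.topics().compactionStatus(topic);
        while (status.status == LongRunningProcessStatus.Status.RUNNING) {
            Thread.sleep(1000);
            status = admin.topics().compactionStatus(topic);
        }
        System.out.println("Compaction finished with status: " + status.status);
        admin.close();
    }
}
```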
-#### When should I trigger compaction?
+:::tip
-How often you [trigger compaction](#triggering-compaction-manually) will vary
widely based on the use case. If you want a compacted topic to be extremely
speedy on read, then you should run compaction fairly frequently.
+The frequency to trigger topic compaction varies widely based on use cases. If
you want a compacted topic to be extremely speedy on read, then you need to run
compaction fairly frequently.
-## Consumer configuration
+:::
-Pulsar consumers and readers need to be configured to read from compacted
topics. The sections below show you how to enable compacted topic reads for
Pulsar's language clients.
+## Configure consumers
-### Java
+Pulsar consumers and readers need to be configured to read from compacted
topics. The section below introduces how to enable compacted topic reads for
Java clients.
To read from a compacted topic using a Java consumer, the `readCompacted`
parameter must be set to `true`. Here's an example consumer for a compacted
topic:
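A minimal sketch of such a consumer follows, assuming a local broker, a `String`-schema topic, and the default exclusive subscription (which `readCompacted` requires); the topic and subscription names are placeholders.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class CompactedTopicConsumer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // hypothetical broker URL
                .build();

        // readCompacted(true) makes the consumer read from the compacted view,
        // so it only sees the latest message for each key.
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("persistent://my-tenant/my-namespace/stock-values")
                .subscriptionName("stock-ticker")
                .readCompacted(true)
                .subscribe();

        Message<String> msg = consumer.receive();
        System.out.println(msg.getKey() + " -> " + msg.getValue());
        consumer.acknowledge(msg);
    }
}
```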
diff --git a/site2/docs/functions-overview.md b/site2/docs/functions-overview.md
index cfc39081fb3..e017695ddb9 100644
--- a/site2/docs/functions-overview.md
+++ b/site2/docs/functions-overview.md
@@ -8,7 +8,7 @@ This section introduces the following content:
* [What are Pulsar Functions](#what-are-pulsar-functions)
* [Why use Pulsar Functions](#why-use-pulsar-functions)
* [Use cases](#use-cases)
-* [User flow](#user-flow)
+* [What's next?](#whats-next)
## What are Pulsar Functions
diff --git a/site2/docs/reference-metrics.md b/site2/docs/reference-metrics.md
index c70b444cb44..c4c7f5c771c 100644
--- a/site2/docs/reference-metrics.md
+++ b/site2/docs/reference-metrics.md
@@ -1,7 +1,7 @@
---
id: reference-metrics
-title: Pulsar Metrics
-sidebar_label: "Pulsar Metrics"
+title: Pulsar metrics
+sidebar_label: "Pulsar metrics"
---
diff --git a/site2/website/sidebars.json b/site2/website/sidebars.json
index 7c51ae83678..5879f3b36b7 100644
--- a/site2/website/sidebars.json
+++ b/site2/website/sidebars.json
@@ -343,7 +343,12 @@
"reference-cli-tools",
"reference-configuration",
"reference-metrics",
- "reference-rest-api-overview"
+ "reference-rest-api-overview",
+ {
+ "type": "link",
+ "href": "/release-notes/",
+ "label": "Release notes"
+ }
]
}
]
diff --git a/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md b/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md
index 5dacf669e76..38d26f70ee8 100644
--- a/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md
+++ b/site2/website/versioned_docs/version-2.10.x/client-libraries-java.md
@@ -1390,7 +1390,11 @@ The TableView interface serves an encapsulated access
pattern, providing a conti
With TableView, Pulsar clients can fetch all the message updates from a topic
and construct a map with the latest values of each key. These values can then
be used to build a local cache of data. In addition, you can register consumers
with the TableView by specifying a listener to perform a scan of the map and
then receive notifications when new messages are received. Consequently, event
handling can be triggered to serve use cases, such as event-driven applications
and message monitoring.
-> **Note:** Each TableView uses one Reader instance per partition, and reads
the topic starting from the compacted view by default. It is highly recommended
to enable automatic compaction by [configuring the topic compaction
policies](cookbooks-compaction.md#configuring-compaction-to-run-automatically)
for the given topic or namespace. More frequent compaction results in shorter
startup times because less data is replayed to reconstruct the TableView of the
topic.
+:::note
+
+Each TableView uses one Reader instance per partition, and reads the topic
starting from the compacted view by default. It is highly recommended to enable
automatic compaction by [configuring the topic compaction
policies](cookbooks-compaction.md#configure-compaction-to-run-automatically)
for the given topic or namespace. More frequent compaction results in shorter
startup times because less data is replayed to reconstruct the TableView of the
topic.
+
+:::
The following figure illustrates the dynamic construction of a TableView
updated with newer values of each key.

diff --git a/site2/website/versioned_docs/version-2.10.x/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.10.x/cookbooks-compaction.md
index dfa31472724..8e4ed064438 100644
--- a/site2/website/versioned_docs/version-2.10.x/cookbooks-compaction.md
+++ b/site2/website/versioned_docs/version-2.10.x/cookbooks-compaction.md
@@ -10,8 +10,8 @@ Pulsar's [topic
compaction](concepts-topic-compaction.md#compaction) feature ena
To use compaction:
* You need to give messages keys, as topic compaction in Pulsar takes place on
a *per-key basis* (i.e. messages are compacted based on their key). For a stock
ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the
key (more on this [below](#when-should-i-use-compacted-topics)). Messages
without keys will be left alone by the compaction process.
-* Compaction can be configured to run
[automatically](#configuring-compaction-to-run-automatically), or you can
manually [trigger](#triggering-compaction-manually) compaction using the Pulsar
administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from
compacted topics ([Java consumers](#java), for example, have a `readCompacted`
setting that must be set to `true`). If this configuration is not set,
consumers will still be able to read from the non-compacted topic.
+* Compaction can be configured to run
[automatically](#configure-compaction-to-run-automatically), or you can
manually [trigger](#trigger-compaction-manually) compaction using the Pulsar
administrative API.
+* Your consumers must be [configured](#configure-consumers) to read from
compacted topics (Java consumers, for example, have a `readCompacted` setting
that must be set to `true`). If this configuration is not set, consumers will
still be able to read from the non-compacted topic.
> Compaction only works on messages that have keys (as in the stock ticker
> example the stock symbol serves as the key for each message). Keys can thus
> be thought of as the axis along which compaction is applied. Messages that
> don't have keys are simply ignored by compaction.
@@ -23,16 +23,18 @@ The classic example of a topic that could benefit from
compaction would be a sto
* They can read from the "original," non-compacted topic in case they need
access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most
up-to-date messages.
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#consumer-con [...]
+Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#configure-co [...]
> One of the benefits of compaction in Pulsar is that you aren't forced to
> choose between compacted and non-compacted topics, as the compaction process
> leaves the original topic as-is and essentially adds an alternate topic. In
> other words, you can run compaction on a topic and consumers that need
> access to the non-compacted version of the topic will not be adversely
> affected.
-## Configuring compaction to run automatically
+## Configure compaction to run automatically
-Tenant administrators can configure a policy for compaction at the namespace
level. The policy specifies how large the topic backlog can grow before
compaction is triggered.
+Compaction policy specifies how large the topic backlog can grow before
compaction is triggered.
-For example, to trigger compaction when the backlog reaches 100MB:
+Tenant administrators can configure a compaction policy at namespace or topic
levels. Configuring the compaction policy at the namespace level applies to all
topics within that namespace.
+
+For example, to trigger compaction in a namespace when the backlog reaches
100MB:
```bash
@@ -41,9 +43,13 @@ $ bin/pulsar-admin namespaces set-compaction-threshold \
```
-Configuring the compaction threshold on a namespace will apply to all topics
within that namespace.
+:::note
+
+To configure the compaction policy at the topic level, you need to enable
[topic-level
policy](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies)
first.
+
+:::
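For illustration, a topic-level threshold might be set through the Java admin client roughly as follows; the `Topics.setCompactionThreshold` call and the broker settings named in the comment are assumptions based on the 2.10 admin API and may differ in your version.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class SetTopicCompactionThreshold {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // hypothetical admin URL
                .build();

        // Assumes topic-level policies are enabled on the brokers
        // (e.g. systemTopicEnabled and topicLevelPoliciesEnabled set to true).
        admin.topics().setCompactionThreshold(
                "persistent://my-tenant/my-namespace/my-topic",
                100L * 1024 * 1024); // compact once the backlog exceeds ~100 MB

        admin.close();
    }
}
```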
-## Triggering compaction manually
+## Trigger compaction manually
In order to run compaction on a topic, you need to use the [`topics
compact`](reference-pulsar-admin.md#topics-compact) command for the
[`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
@@ -79,15 +85,15 @@ $ bin/pulsar compact-topic \
```
-#### When should I trigger compaction?
+:::tip
-How often you [trigger compaction](#triggering-compaction-manually) will vary
widely based on the use case. If you want a compacted topic to be extremely
speedy on read, then you should run compaction fairly frequently.
+The frequency to trigger topic compaction varies widely based on use cases. If
you want a compacted topic to be extremely speedy on read, then you need to run
compaction fairly frequently.
-## Consumer configuration
+:::
-Pulsar consumers and readers need to be configured to read from compacted
topics. The sections below show you how to enable compacted topic reads for
Pulsar's language clients.
+## Configure consumers
-### Java
+Pulsar consumers and readers need to be configured to read from compacted
topics. The section below introduces how to enable compacted topic reads for
Java clients.
In order to read from a compacted topic using a Java consumer, the
`readCompacted` parameter must be set to `true`. Here's an example consumer for
a compacted topic:
diff --git a/site2/website/versioned_docs/version-2.10.x/io-debezium-source.md b/site2/website/versioned_docs/version-2.10.x/io-debezium-source.md
index aedbd18dce4..b487ab6477e 100644
--- a/site2/website/versioned_docs/version-2.10.x/io-debezium-source.md
+++ b/site2/website/versioned_docs/version-2.10.x/io-debezium-source.md
@@ -25,10 +25,10 @@ The configuration of Debezium source connector has the
following properties.
| `key.converter` | true | null | The converter provided by Kafka Connect to
convert record key. |
| `value.converter` | true | null | The converter provided by Kafka Connect to
convert record value. |
| `database.history` | true | null | The name of the database history class. |
-| `database.history.pulsar.topic` | true | null | The name of the database
history topic where the connector writes and recovers DDL statements. <br /><br
/>**Note: this topic is for internal use only and should not be used by
consumers.** |
+| `database.history.pulsar.topic` | true | null | The name of the database
history topic where the connector writes and recovers DDL statements. <br /><br
/>**Note: This topic is for internal use only and should not be used by
consumers.** |
| `database.history.pulsar.service.url` | true | null | Pulsar cluster service
URL for history topic. |
| `offset.storage.topic` | true | null | Record the last committed offsets
that the connector successfully completes. |
-| `json-with-envelope` | false | false | Present the message only consist of
payload.
+| `json-with-envelope` | false | false | Present the message that only
consists of payload.|
| `database.history.pulsar.reader.config` | false | null | The configs of the
reader for the database schema history topic, in the form of a JSON string with
key-value pairs. <br />**Note:** This property is only available in 2.10.2 and
later versions. |
| `offset.storage.reader.config` | false | null | The configs of the reader
for the kafka connector offsets topic, in the form of a JSON string with
key-value pairs. <br />**Note:** This property is only available in 2.10.2 and
later versions.|
diff --git a/site2/website/versioned_docs/version-2.10.x/reference-configuration.md b/site2/website/versioned_docs/version-2.10.x/reference-configuration.md
index cfd6a794957..d21b890b99d 100644
--- a/site2/website/versioned_docs/version-2.10.x/reference-configuration.md
+++ b/site2/website/versioned_docs/version-2.10.x/reference-configuration.md
@@ -40,7 +40,7 @@ BookKeeper is a replicated log storage system that Pulsar
uses for durable stora
|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This
parameter is used by the ZooKeeper-based ledger manager as a root znode to
store all ledgers.|/ledgers|
|ledgerStorageClass|Ledger storage implementation
class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
|entryLogFilePreallocationEnabled|Enable or disable entry logger
preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|2147483648|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|1073741824|
|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose
remaining size percentage reaches below this threshold will be compacted in a
minor compaction. If set to less than zero, the minor compaction is
disabled.|0.2|
|minorCompactionInterval|Time interval to run minor compaction, in seconds. If
set to less than zero, the minor compaction is disabled. Note: should be
greater than gcWaitTime. |3600|
|majorCompactionThreshold|The threshold of major compaction. Entry log files
whose remaining size percentage reaches below this threshold will be compacted
in a major compaction. Those entry log files whose remaining size percentage is
still higher than the threshold will never be compacted. If set to less than
zero, the minor compaction is disabled.|0.5|
diff --git a/site2/website/versioned_docs/version-2.10.x/security-policy-and-supported-versions.md b/site2/website/versioned_docs/version-2.10.x/security-policy-and-supported-versions.md
index b7fe6f2e156..637147a5dc2 100644
--- a/site2/website/versioned_docs/version-2.10.x/security-policy-and-supported-versions.md
+++ b/site2/website/versioned_docs/version-2.10.x/security-policy-and-supported-versions.md
@@ -2,7 +2,6 @@
id: security-policy-and-supported-versions
title: Security Policy and Supported Versions
sidebar_label: "Security Policy and Supported Versions"
-original_id: security-policy-and-supported-versions
---
## Using Pulsar's Security Features
@@ -10,10 +9,13 @@ original_id: security-policy-and-supported-versions
You can find documentation on Pulsar's available security features and how to
use them here:
https://pulsar.apache.org/docs/en/security-overview/.
-## Security Vulnerability Announcements
+## Security Vulnerability Process
-The Pulsar community will announce security vulnerabilities and how to
mitigate them on the [[email protected]](mailto:[email protected]).
-For instructions on how to subscribe, please see
https://pulsar.apache.org/contact/.
+The Pulsar community follows the ASF [security vulnerability handling
process](https://apache.org/security/#vulnerability-handling).
+
+To report a new vulnerability you have discovered, please follow the [ASF
security vulnerability reporting
process](https://apache.org/security/#reporting-a-vulnerability). To report a
vulnerability for Pulsar, contact the [Apache Security
Team](https://www.apache.org/security/). When reporting a vulnerability to
[[email protected]](mailto:[email protected]), you can copy your email to
[[email protected]](mailto:[email protected]) to send your
report to the Apache Pul [...]
+
+It is the responsibility of the security vulnerability handling project team
(Apache Pulsar PMC in most cases) to make public security vulnerability
announcements. You can follow announcements on the
[[email protected]](mailto:[email protected]) mailing list. For
instructions on how to subscribe, please see https://pulsar.apache.org/contact/.
## Versioning Policy
diff --git a/site2/website/versioned_docs/version-2.10.x/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.10.x/tiered-storage-filesystem.md
index fb39290ef8f..bb399b500cb 100644
--- a/site2/website/versioned_docs/version-2.10.x/tiered-storage-filesystem.md
+++ b/site2/website/versioned_docs/version-2.10.x/tiered-storage-filesystem.md
@@ -109,7 +109,7 @@ You can configure the filesystem offloader driver in the
`broker.conf` or `stand
`managedLedgerOffloadDriver` | Offloader driver name, which is
case-insensitive. | filesystem
`fileSystemURI` | Connection address, which is the URI to access the default
Hadoop distributed file system. | hdfs://127.0.0.1:9000
`offloadersDirectory` | Offloader directory | offloaders
- `fileSystemProfilePath` | Hadoop profile path. The configuration file is
stored in the Hadoop profile path. It contains various settings for Hadoop
performance tuning. | ../conf/filesystem_offload_core_site.xml
+ `fileSystemProfilePath` | Hadoop profile path. The configuration file is
stored in the Hadoop profile path. It contains various settings for Hadoop
performance tuning. | conf/filesystem_offload_core_site.xml
- **Optional** configurations are as below.
@@ -128,7 +128,7 @@ You can configure the filesystem offloader driver in the
`broker.conf` or `stand
|---|---|---
`managedLedgerOffloadDriver` | Offloader driver name, which is
case-insensitive. | filesystem
`offloadersDirectory` | Offloader directory | offloaders
- `fileSystemProfilePath` | NFS profile path. The configuration file is stored
in the NFS profile path. It contains various settings for performance tuning. |
../conf/filesystem_offload_core_site.xml
+ `fileSystemProfilePath` | NFS profile path. The configuration file is stored
in the NFS profile path. It contains various settings for performance tuning. |
conf/filesystem_offload_core_site.xml
- **Optional** configurations are as below.
@@ -370,7 +370,7 @@ Set the following configurations in the
`conf/standalone.conf` file.
managedLedgerOffloadDriver=filesystem
fileSystemURI=hdfs://127.0.0.1:9000
-fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
+fileSystemProfilePath=conf/filesystem_offload_core_site.xml
```
@@ -421,7 +421,7 @@ As indicated in the [configuration](#configuration)
section, you need to configu
```conf
managedLedgerOffloadDriver=filesystem
- fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
+ fileSystemProfilePath=conf/filesystem_offload_core_site.xml
```
diff --git a/site2/website/versioned_docs/version-2.8.x/client-libraries-java.md b/site2/website/versioned_docs/version-2.8.x/client-libraries-java.md
index 8c149ec7b71..d430e7ca2c0 100644
--- a/site2/website/versioned_docs/version-2.8.x/client-libraries-java.md
+++ b/site2/website/versioned_docs/version-2.8.x/client-libraries-java.md
@@ -128,7 +128,7 @@ long|`statsIntervalSeconds`|Interval between each stats
info<br /><br />Stats is
int|`numIoThreads`| The number of threads used for handling connections to
brokers | 1
int|`numListenerThreads`|The number of threads used for handling message
listeners. The listener thread pool is shared across all the consumers and
readers using the "listener" model to get messages. For a given consumer, the
listener is always invoked from the same thread to ensure ordering. If you want
multiple threads to process a single topic, you need to create a
[`shared`](https://pulsar.apache.org/docs/en/next/concepts-messaging/#shared)
subscription and multiple consumers for thi [...]
boolean|`useTcpNoDelay`|Whether to use TCP no-delay flag on the connection to
disable Nagle algorithm |true
-boolean |`useTls` |Whether to use TLS encryption on the connection| false
+boolean |`enableTls` |Whether to use TLS encryption on the connection. Note
that this parameter is **deprecated**. If you want to enable TLS, use
`pulsar+ssl://` in `serviceUrl` instead.| false
string | `tlsTrustCertsFilePath` |Path to the trusted TLS certificate file|None
boolean|`tlsAllowInsecureConnection`|Whether the Pulsar client accepts
untrusted TLS certificate from broker | false
boolean | `tlsHostnameVerificationEnable` | Whether to enable TLS hostname
verification|false
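To make the `pulsar+ssl://` guidance above concrete, here is a hedged sketch of a TLS-enabled client. The broker hostname, port, and certificate path are placeholders, and the builder methods are assumed from the Java client API rather than taken from this change.

```java
import org.apache.pulsar.client.api.PulsarClient;

public class TlsClientExample {
    public static void main(String[] args) throws Exception {
        // TLS is enabled by the pulsar+ssl:// scheme rather than a separate flag.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://broker.example.com:6651") // hypothetical broker
                .tlsTrustCertsFilePath("/path/to/ca.cert.pem")      // hypothetical CA path
                .allowTlsInsecureConnection(false)
                .enableTlsHostnameVerification(true)
                .build();

        // ... create producers and consumers as usual ...
        client.close();
    }
}
```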
diff --git a/site2/website/versioned_docs/version-2.8.x/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.8.x/cookbooks-compaction.md
index dfa31472724..8e4ed064438 100644
--- a/site2/website/versioned_docs/version-2.8.x/cookbooks-compaction.md
+++ b/site2/website/versioned_docs/version-2.8.x/cookbooks-compaction.md
@@ -10,8 +10,8 @@ Pulsar's [topic
compaction](concepts-topic-compaction.md#compaction) feature ena
To use compaction:
* You need to give messages keys, as topic compaction in Pulsar takes place on
a *per-key basis* (i.e. messages are compacted based on their key). For a stock
ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the
key (more on this [below](#when-should-i-use-compacted-topics)). Messages
without keys will be left alone by the compaction process.
-* Compaction can be configured to run
[automatically](#configuring-compaction-to-run-automatically), or you can
manually [trigger](#triggering-compaction-manually) compaction using the Pulsar
administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from
compacted topics ([Java consumers](#java), for example, have a `readCompacted`
setting that must be set to `true`). If this configuration is not set,
consumers will still be able to read from the non-compacted topic.
+* Compaction can be configured to run
[automatically](#configure-compaction-to-run-automatically), or you can
manually [trigger](#trigger-compaction-manually) compaction using the Pulsar
administrative API.
+* Your consumers must be [configured](#configure-consumers) to read from
compacted topics (Java consumers, for example, have a `readCompacted` setting
that must be set to `true`). If this configuration is not set, consumers will
still be able to read from the non-compacted topic.
> Compaction only works on messages that have keys (as in the stock ticker
> example the stock symbol serves as the key for each message). Keys can thus
> be thought of as the axis along which compaction is applied. Messages that
> don't have keys are simply ignored by compaction.
@@ -23,16 +23,18 @@ The classic example of a topic that could benefit from
compaction would be a sto
* They can read from the "original," non-compacted topic in case they need
access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most
up-to-date messages.
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#consumer-con [...]
+Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#configure-co [...]
> One of the benefits of compaction in Pulsar is that you aren't forced to
> choose between compacted and non-compacted topics, as the compaction process
> leaves the original topic as-is and essentially adds an alternate topic. In
> other words, you can run compaction on a topic and consumers that need
> access to the non-compacted version of the topic will not be adversely
> affected.
-## Configuring compaction to run automatically
+## Configure compaction to run automatically
-Tenant administrators can configure a policy for compaction at the namespace
level. The policy specifies how large the topic backlog can grow before
compaction is triggered.
+Compaction policy specifies how large the topic backlog can grow before
compaction is triggered.
-For example, to trigger compaction when the backlog reaches 100MB:
+Tenant administrators can configure a compaction policy at namespace or topic
levels. Configuring the compaction policy at the namespace level applies to all
topics within that namespace.
+
+For example, to trigger compaction in a namespace when the backlog reaches
100MB:
```bash
@@ -41,9 +43,13 @@ $ bin/pulsar-admin namespaces set-compaction-threshold \
```
-Configuring the compaction threshold on a namespace will apply to all topics
within that namespace.
+:::note
+
+To configure the compaction policy at the topic level, you need to enable
[topic-level
policy](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies)
first.
+
+:::
-## Triggering compaction manually
+## Trigger compaction manually
In order to run compaction on a topic, you need to use the [`topics
compact`](reference-pulsar-admin.md#topics-compact) command for the
[`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
@@ -79,15 +85,15 @@ $ bin/pulsar compact-topic \
```
-#### When should I trigger compaction?
+:::tip
-How often you [trigger compaction](#triggering-compaction-manually) will vary
widely based on the use case. If you want a compacted topic to be extremely
speedy on read, then you should run compaction fairly frequently.
+The frequency to trigger topic compaction varies widely based on use cases. If
you want a compacted topic to be extremely speedy on read, then you need to run
compaction fairly frequently.
-## Consumer configuration
+:::
-Pulsar consumers and readers need to be configured to read from compacted
topics. The sections below show you how to enable compacted topic reads for
Pulsar's language clients.
+## Configure consumers
-### Java
+Pulsar consumers and readers need to be configured to read from compacted
topics. The section below introduces how to enable compacted topic reads for
Java clients.
In order to read from a compacted topic using a Java consumer, the
`readCompacted` parameter must be set to `true`. Here's an example consumer for
a compacted topic:
diff --git a/site2/website/versioned_docs/version-2.8.x/developing-binary-protocol.md b/site2/website/versioned_docs/version-2.8.x/developing-binary-protocol.md
index 7a2a034b044..1f225c52df1 100644
--- a/site2/website/versioned_docs/version-2.8.x/developing-binary-protocol.md
+++ b/site2/website/versioned_docs/version-2.8.x/developing-binary-protocol.md
@@ -302,7 +302,8 @@ subscription is not already there, a new one will be
created.
:::note
-In 2.8.4 and later versions, if the client does not receive a response
indicating the success or failure of consumer creation, it first sends a
command to close the original consumer before sending a command to re-attempt
consumer creation.
+* Before creating or connecting a consumer, you need to perform [topic
lookup](#topic-lookup) first.
+* In 2.8.4 and later versions, if the client does not receive a response
indicating the success or failure of consumer creation, it first sends a
command to close the original consumer before sending a command to re-attempt
consumer creation.
:::
diff --git a/site2/website/versioned_docs/version-2.8.x/reference-configuration.md b/site2/website/versioned_docs/version-2.8.x/reference-configuration.md
index b58d3f23166..6ef75bac740 100644
--- a/site2/website/versioned_docs/version-2.8.x/reference-configuration.md
+++ b/site2/website/versioned_docs/version-2.8.x/reference-configuration.md
@@ -41,7 +41,7 @@ BookKeeper is a replicated log storage system that Pulsar
uses for durable stora
|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This
parameter is used by the ZooKeeper-based ledger manager as a root znode to
store all ledgers.|/ledgers|
|ledgerStorageClass|Ledger storage implementation
class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
|entryLogFilePreallocationEnabled|Enable or disable entry logger
preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|2147483648|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|1073741824|
|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose
remaining size percentage reaches below this threshold will be compacted in a
minor compaction. If set to less than zero, the minor compaction is
disabled.|0.2|
|minorCompactionInterval|Time interval to run minor compaction, in seconds. If
set to less than zero, the minor compaction is disabled. Note: should be
greater than gcWaitTime. |3600|
|majorCompactionThreshold|The threshold of major compaction. Entry log files
whose remaining size percentage reaches below this threshold will be compacted
in a major compaction. Those entry log files whose remaining size percentage is
still higher than the threshold will never be compacted. If set to less than
zero, the minor compaction is disabled.|0.5|
diff --git a/site2/website/versioned_docs/version-2.8.x/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.8.x/tiered-storage-filesystem.md
index 4456b615afa..85a1644120f 100644
--- a/site2/website/versioned_docs/version-2.8.x/tiered-storage-filesystem.md
+++ b/site2/website/versioned_docs/version-2.8.x/tiered-storage-filesystem.md
@@ -109,7 +109,7 @@ You can configure the filesystem offloader driver in the
`broker.conf` or `stand
`managedLedgerOffloadDriver` | Offloader driver name, which is
case-insensitive. | filesystem
`fileSystemURI` | Connection address, which is the URI to access the default
Hadoop distributed file system. | hdfs://127.0.0.1:9000
`offloadersDirectory` | Offloader directory | offloaders
- `fileSystemProfilePath` | Hadoop profile path. The configuration file is
stored in the Hadoop profile path. It contains various settings for Hadoop
performance tuning. | ../conf/filesystem_offload_core_site.xml
+ `fileSystemProfilePath` | Hadoop profile path. The configuration file is
stored in the Hadoop profile path. It contains various settings for Hadoop
performance tuning. | conf/filesystem_offload_core_site.xml
- **Optional** configurations are as below.
@@ -127,7 +127,7 @@ You can configure the filesystem offloader driver in the
`broker.conf` or `stand
|---|---|---
`managedLedgerOffloadDriver` | Offloader driver name, which is
case-insensitive. | filesystem
`offloadersDirectory` | Offloader directory | offloaders
- `fileSystemProfilePath` | NFS profile path. The configuration file is stored
in the NFS profile path. It contains various settings for performance tuning. |
../conf/filesystem_offload_core_site.xml
+ `fileSystemProfilePath` | NFS profile path. The configuration file is stored
in the NFS profile path. It contains various settings for performance tuning. |
conf/filesystem_offload_core_site.xml
- **Optional** configurations are as below.
@@ -369,7 +369,7 @@ Set the following configurations in the
`conf/standalone.conf` file.
managedLedgerOffloadDriver=filesystem
fileSystemURI=hdfs://127.0.0.1:9000
-fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
+fileSystemProfilePath=conf/filesystem_offload_core_site.xml
```
@@ -420,7 +420,7 @@ As indicated in the [configuration](#configuration)
section, you need to configu
```conf
managedLedgerOffloadDriver=filesystem
- fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
+ fileSystemProfilePath=conf/filesystem_offload_core_site.xml
```
diff --git a/site2/website/versioned_docs/version-2.9.x/cookbooks-compaction.md b/site2/website/versioned_docs/version-2.9.x/cookbooks-compaction.md
index dfa31472724..8e4ed064438 100644
--- a/site2/website/versioned_docs/version-2.9.x/cookbooks-compaction.md
+++ b/site2/website/versioned_docs/version-2.9.x/cookbooks-compaction.md
@@ -10,8 +10,8 @@ Pulsar's [topic
compaction](concepts-topic-compaction.md#compaction) feature ena
To use compaction:
* You need to give messages keys, as topic compaction in Pulsar takes place on
a *per-key basis* (i.e. messages are compacted based on their key). For a stock
ticker use case, the stock symbol---e.g. `AAPL` or `GOOG`---could serve as the
key (more on this [below](#when-should-i-use-compacted-topics)). Messages
without keys will be left alone by the compaction process.
-* Compaction can be configured to run
[automatically](#configuring-compaction-to-run-automatically), or you can
manually [trigger](#triggering-compaction-manually) compaction using the Pulsar
administrative API.
-* Your consumers must be [configured](#consumer-configuration) to read from
compacted topics ([Java consumers](#java), for example, have a `readCompacted`
setting that must be set to `true`). If this configuration is not set,
consumers will still be able to read from the non-compacted topic.
+* Compaction can be configured to run
[automatically](#configure-compaction-to-run-automatically), or you can
manually [trigger](#trigger-compaction-manually) compaction using the Pulsar
administrative API.
+* Your consumers must be [configured](#configure-consumers) to read from
compacted topics (Java consumers, for example, have a `readCompacted` setting
that must be set to `true`). If this configuration is not set, consumers will
still be able to read from the non-compacted topic.
> Compaction only works on messages that have keys (as in the stock ticker
> example the stock symbol serves as the key for each message). Keys can thus
> be thought of as the axis along which compaction is applied. Messages that
> don't have keys are simply ignored by compaction.
@@ -23,16 +23,18 @@ The classic example of a topic that could benefit from
compaction would be a sto
* They can read from the "original," non-compacted topic in case they need
access to "historical" values, i.e. the entirety of the topic's messages.
* They can read from the compacted topic if they only want to see the most
up-to-date messages.
-Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#consumer-con [...]
+Thus, if you're using a Pulsar topic called `stock-values`, some consumers
could have access to all messages in the topic (perhaps because they're
performing some kind of number crunching of all values in the last hour) while
the consumers used to power the real-time stock ticker only see the compacted
topic (and thus aren't forced to process outdated messages). Which variant of
the topic any given consumer pulls messages from is determined by the
consumer's [configuration](#configure-co [...]
> One of the benefits of compaction in Pulsar is that you aren't forced to
> choose between compacted and non-compacted topics, as the compaction process
> leaves the original topic as-is and essentially adds an alternate topic. In
> other words, you can run compaction on a topic and consumers that need
> access to the non-compacted version of the topic will not be adversely
> affected.
-## Configuring compaction to run automatically
+## Configure compaction to run automatically
-Tenant administrators can configure a policy for compaction at the namespace
level. The policy specifies how large the topic backlog can grow before
compaction is triggered.
+Compaction policy specifies how large the topic backlog can grow before
compaction is triggered.
-For example, to trigger compaction when the backlog reaches 100MB:
+Tenant administrators can configure a compaction policy at namespace or topic
levels. Configuring the compaction policy at the namespace level applies to all
topics within that namespace.
+
+For example, to trigger compaction in a namespace when the backlog reaches
100MB:
```bash
@@ -41,9 +43,13 @@ $ bin/pulsar-admin namespaces set-compaction-threshold \
```
-Configuring the compaction threshold on a namespace will apply to all topics
within that namespace.
+:::note
+
+To configure the compaction policy at the topic level, you need to enable
[topic-level
policy](concepts-multi-tenancy.md#namespace-change-events-and-topic-level-policies)
first.
+
+:::
-## Triggering compaction manually
+## Trigger compaction manually
In order to run compaction on a topic, you need to use the [`topics
compact`](reference-pulsar-admin.md#topics-compact) command for the
[`pulsar-admin`](reference-pulsar-admin.md) CLI tool. Here's an example:
@@ -79,15 +85,15 @@ $ bin/pulsar compact-topic \
```
-#### When should I trigger compaction?
+:::tip
-How often you [trigger compaction](#triggering-compaction-manually) will vary
widely based on the use case. If you want a compacted topic to be extremely
speedy on read, then you should run compaction fairly frequently.
+The frequency to trigger topic compaction varies widely based on use cases. If
you want a compacted topic to be extremely speedy on read, then you need to run
compaction fairly frequently.
-## Consumer configuration
+:::
-Pulsar consumers and readers need to be configured to read from compacted
topics. The sections below show you how to enable compacted topic reads for
Pulsar's language clients.
+## Configure consumers
-### Java
+Pulsar consumers and readers need to be configured to read from compacted
topics. The section below introduces how to enable compacted topic reads for
Java clients.
In order to read from a compacted topic using a Java consumer, the
`readCompacted` parameter must be set to `true`. Here's an example consumer for
a compacted topic:
diff --git a/site2/website/versioned_docs/version-2.9.x/io-debezium-source.md b/site2/website/versioned_docs/version-2.9.x/io-debezium-source.md
index f739a4cdc49..4ed7f4a8d26 100644
--- a/site2/website/versioned_docs/version-2.9.x/io-debezium-source.md
+++ b/site2/website/versioned_docs/version-2.9.x/io-debezium-source.md
@@ -25,10 +25,10 @@ The configuration of Debezium source connector has the
following properties.
| `key.converter` | true | null | The converter provided by Kafka Connect to
convert record key. |
| `value.converter` | true | null | The converter provided by Kafka Connect to
convert record value. |
| `database.history` | true | null | The name of the database history class. |
-| `database.history.pulsar.topic` | true | null | The name of the database
history topic where the connector writes and recovers DDL statements. <br /><br
/>**Note: this topic is for internal use only and should not be used by
consumers.** |
+| `database.history.pulsar.topic` | true | null | The name of the database
history topic where the connector writes and recovers DDL statements. <br /><br
/>**Note: This topic is for internal use only and should not be used by
consumers.** |
| `database.history.pulsar.service.url` | true | null | Pulsar cluster service
URL for history topic. |
| `offset.storage.topic` | true | null | Record the last committed offsets
that the connector successfully completes. |
-| `json-with-envelope` | false | false | Present the message only consist of
payload.
+| `json-with-envelope` | false | false | Present the message that only
consists of payload.|
| `database.history.pulsar.reader.config` | false | null | The configs of the
reader for the database schema history topic, in the form of a JSON string with
key-value pairs. <br />**Note:** This property is only available in 2.9.4 and
later versions. |
| `offset.storage.reader.config` | false | null | The configs of the reader
for the kafka connector offsets topic, in the form of a JSON string with
key-value pairs. <br />**Note:** This property is only available in 2.9.4 and
later versions.|
diff --git a/site2/website/versioned_docs/version-2.9.x/reference-configuration.md b/site2/website/versioned_docs/version-2.9.x/reference-configuration.md
index 11b77269900..e90124fbe1d 100644
--- a/site2/website/versioned_docs/version-2.9.x/reference-configuration.md
+++ b/site2/website/versioned_docs/version-2.9.x/reference-configuration.md
@@ -40,7 +40,7 @@ BookKeeper is a replicated log storage system that Pulsar
uses for durable stora
|zkLedgersRootPath|The root ZooKeeper path used to store ledger metadata. This
parameter is used by the ZooKeeper-based ledger manager as a root znode to
store all ledgers.|/ledgers|
|ledgerStorageClass|Ledger storage implementation
class|org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage|
|entryLogFilePreallocationEnabled|Enable or disable entry logger
preallocation|true|
-|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|2147483648|
+|logSizeLimit|Max file size of the entry logger, in bytes. A new entry log
file will be created when the old one reaches the file size
limitation.|1073741824|
|minorCompactionThreshold|Threshold of minor compaction. Entry log files whose
remaining size percentage reaches below this threshold will be compacted in a
minor compaction. If set to less than zero, the minor compaction is
disabled.|0.2|
|minorCompactionInterval|Time interval to run minor compaction, in seconds. If
set to less than zero, the minor compaction is disabled. Note: should be
greater than gcWaitTime. |3600|
|majorCompactionThreshold|The threshold of major compaction. Entry log files
whose remaining size percentage reaches below this threshold will be compacted
in a major compaction. Those entry log files whose remaining size percentage is
still higher than the threshold will never be compacted. If set to less than
zero, the minor compaction is disabled.|0.5|
diff --git a/site2/website/versioned_docs/version-2.9.x/tiered-storage-filesystem.md b/site2/website/versioned_docs/version-2.9.x/tiered-storage-filesystem.md
index a5844d22fb5..8164e68208b 100644
--- a/site2/website/versioned_docs/version-2.9.x/tiered-storage-filesystem.md
+++ b/site2/website/versioned_docs/version-2.9.x/tiered-storage-filesystem.md
@@ -98,7 +98,7 @@ You can configure filesystem offloader driver in the
configuration file `broker.
|---|---|---
`managedLedgerOffloadDriver` | Offloader driver name, which is
case-insensitive. | filesystem
`fileSystemURI` | Connection address | hdfs://127.0.0.1:9000
- `fileSystemProfilePath` | Hadoop profile path |
../conf/filesystem_offload_core_site.xml
+ `fileSystemProfilePath` | Hadoop profile path |
conf/filesystem_offload_core_site.xml
- **Optional** configurations are as below.
@@ -139,11 +139,11 @@ The configuration file is stored in the Hadoop profile
path. It contains various
##### Example
-This example sets the Hadoop profile path as
_../conf/filesystem_offload_core_site.xml_.
+This example sets the Hadoop profile path as
_conf/filesystem_offload_core_site.xml_.
```conf
-fileSystemProfilePath=../conf/filesystem_offload_core_site.xml
+fileSystemProfilePath=conf/filesystem_offload_core_site.xml
```
diff --git a/wiki/release/release-process.md b/wiki/release/release-process.md
index 91339f0b556..f2af01084ca 100644
--- a/wiki/release/release-process.md
+++ b/wiki/release/release-process.md
@@ -473,18 +473,16 @@ Steps and examples see [Pulsar Release Notes
Guide](https://docs.google.com/docu
## 17. Update the site
-The workflow for updating the site is slightly different for major and minor
releases.
-
-### Update the site for major releases
-For major release, the website is updated based on the `master` branch.
+### Update the site for minor releases
+For minor releases, such as 2.10, the website is updated based on the `master`
branch.
-1. Create a new branch off master
+1. Create a new branch off master.
```shell
git checkout -b doc_release_<release-version>
```
-2. Go to the website directory
+2. Go to the website directory.
```shell
cd site2/website
@@ -504,38 +502,28 @@ After you run this command, a new folder
`version-<release-version>` is added in
versioned_sidebars/version-<release-version>-sidebars.json
```
-> Note: You can move the latest version under the old version in the
`versions.json` file. Make sure the Algolia index works before moving 2.X.0 as
the current stable.
+> **Note**
+> You can move the latest version under the old version in the `versions.json`
file. Make sure the Algolia index works before moving 2.X.0 as the current
stable.
-4. Update `releases.json` file by adding `<release-version>` to the second of
the list(this is to make search could work. After your PR is merged, the Pulsar
website is built and tagged for search, you can change it to the first list).
+4. Update the `releases.json` file by adding `<release-version>` to the second
of the list (this is to make the search work. After your PR is merged, the
Pulsar website is built and tagged for search, you can change it to the first
list).
5. Send out a PR request for review.
- After your PR is approved and merged to master, the website is published
automatically after new website build. The website is built every 6 hours.
+ After your PR is approved and merged to master, the website is published
automatically after the new website is built. The website is built every 6
hours.
-6. Check the new website after website build.
- Open https://pulsar.apache.org in your browsers to verify all the changes
are alive. If the website build succeeds but the website is not updated, you
can try to Sync git repository. Navigate to https://selfserve.apache.org/ and
click the "Synchronize Git Repositories" and then select apache/pulsar.
+6. Check the new website after the website is built.
+ Open https://pulsar.apache.org in your browsers to verify all the changes
are alive. If the website build succeeds but the website is not updated, you
can try to sync the git repository. Navigate to https://selfserve.apache.org/
and click the "Synchronize Git Repositories" and then select apache/pulsar.
-7. Publish the release on GitHub, and copy the same release notes:
https://github.com/apache/pulsar/releases
+7. Publish the release on GitHub, and copy the same release notes:
https://github.com/apache/pulsar/releases.
-8. Update the deploy version to the current release version in
deployment/terraform-ansible/deploy-pulsar.yaml
-
-9. Generate the doc set and sidebar file for the next minor release `2.X.1`
based on the `site2/docs` folder. You can follow step 1, 2, 3 and submit those
files to apache/pulsar repository. This step is a preparation for `2.X.1`
release.
-
-### Update the site for minor releases
+8. Update the deploy version to the current release version in
`deployment/terraform-ansible/deploy-pulsar.yaml`.
-The new updates for the minor release docs are processed in its doc set and
sidebar file directly before release. You can follow step 4~8 (in major
release) to update the site. You'll also need to add this new version to
`versions.json`.
-
-To make preparation for the next minor release, you need to generate the doc
set and sidebar file based on the previous release. Take `2.X.2` as example,
`2.X.2` doc set and sidebar.json file are generated based on `2.X.1`. You can
make it with the following steps:
-
-1. Copy the `version-2.X.1` doc set and `version-2.X.1-sidebars.json` file and
rename them as `version-2.X.2` doc set and `version-2.X.2-sidebars.json` file.
-
-2. Update the "id" from `version-2.X.1` to `version-2.X.2` for the md files in
the `version-2.X.2` doc set and `version-2.X.2-sidebars.json` file.
-
-3. Submit the new doc set and sidebar.json file to the apache/pulsar
repository.
+9. Generate the doc set and sidebar file for the next minor release `2.X.x`
based on the `site2/docs` folder. You can follow steps 1, 2, and 3, and submit
those files to the `apache/pulsar` repository. This step is a preparation for
the `2.X.x` release.
> **Note**
-> - The `yarn run version <release-version>` command generates the new doc set
and sidebar.json file based on the `site2/docs` folder.
-> - The minor release doc is generated based on the previous minor release
(e.g.: `2.X.2` doc is generated based on `2.X.1`, and `2.X.3`doc is generated
based on `2.X.2`), so you cannot use the `yarn run version <release-version>`
command directly.
+> Starting from 2.8, you don't need to generate an independent doc set or
update the Pulsar site for bug-fix releases, such as 2.8.1, 2.8.2, and so on.
Instead, the generic doc set 2.8.x is used.
+
+:::
## 18. Announce the release