This is an automated email from the ASF dual-hosted git repository.

liuyu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 81f26d6  [website][upgrade]feat: docs migration - 2.7.1 / deploy 
(#12608)
81f26d6 is described below

commit 81f26d6a5c64dce5f78fd058aaae346944a429cf
Author: Li Li <[email protected]>
AuthorDate: Fri Nov 5 11:42:36 2021 +0800

    [website][upgrade]feat: docs migration - 2.7.1 / deploy (#12608)
    
    * [website][upgrade]feat: docs migration - 2.7.1 / deploy
    
    Signed-off-by: LiLi <[email protected]>
    
    * patch
    
    Signed-off-by: LiLi <[email protected]>
---
 site2/docs/deploy-monitoring.md                    |   4 +-
 site2/website-next/docs/deploy-monitoring.md       |   4 +-
 .../versioned_docs/version-2.7.1/deploy-aws.md     | 274 +++++++++++
 .../deploy-bare-metal-multi-cluster.md             | 483 ++++++++++++++++++
 .../version-2.7.1/deploy-bare-metal.md             | 546 +++++++++++++++++++++
 .../versioned_docs/version-2.7.1/deploy-dcos.md    | 202 ++++++++
 .../versioned_docs/version-2.7.1/deploy-docker.md  |  64 +++
 .../version-2.7.1/deploy-kubernetes.md             |  15 +
 .../deploy-monitoring.md                           |   4 +-
 .../version-2.7.3/deploy-monitoring.md             |   4 +-
 .../version-2.8.0/deploy-monitoring.md             |   4 +-
 .../versioned_sidebars/version-2.7.1-sidebars.json |  34 ++
 .../version-2.7.0/deploy-monitoring.md             |   4 +-
 .../version-2.7.1/deploy-monitoring.md             |   4 +-
 .../version-2.7.2/deploy-monitoring.md             |   4 +-
 .../version-2.7.3/deploy-monitoring.md             |   4 +-
 .../version-2.8.0/deploy-monitoring.md             |   4 +-
 .../version-2.8.1/deploy-monitoring.md             |   4 +-
 .../version-2.8.2/deploy-monitoring.md             |   4 +-
 19 files changed, 1642 insertions(+), 24 deletions(-)

diff --git a/site2/docs/deploy-monitoring.md b/site2/docs/deploy-monitoring.md
index 14fb45d..87aba2e 100644
--- a/site2/docs/deploy-monitoring.md
+++ b/site2/docs/deploy-monitoring.md
@@ -123,5 +123,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website-next/docs/deploy-monitoring.md 
b/site2/website-next/docs/deploy-monitoring.md
index 221e5cb..fa9e6e2 100644
--- a/site2/website-next/docs/deploy-monitoring.md
+++ b/site2/website-next/docs/deploy-monitoring.md
@@ -147,5 +147,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website-next/versioned_docs/version-2.7.1/deploy-aws.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-aws.md
new file mode 100644
index 0000000..78defa1
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-aws.md
@@ -0,0 +1,274 @@
+---
+id: deploy-aws
+title: Deploying a Pulsar cluster on AWS using Terraform and Ansible
+sidebar_label: "Amazon Web Services"
+original_id: deploy-aws
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+> For instructions on deploying a single Pulsar cluster manually rather than 
using Terraform and Ansible, see [Deploying a Pulsar cluster on bare 
metal](deploy-bare-metal.md). For instructions on manually deploying a 
multi-cluster Pulsar instance, see [Deploying a Pulsar instance on bare 
metal](deploy-bare-metal-multi-cluster).
+
+One of the easiest ways to get a Pulsar [cluster](reference-terminology.md#cluster) running on [Amazon Web Services](https://aws.amazon.com/) (AWS) is to use the [Terraform](https://terraform.io) infrastructure provisioning tool and the [Ansible](https://www.ansible.com) server automation tool. Terraform can create the resources necessary for running the Pulsar cluster---[EC2](https://aws.amazon.com/ec2/) instances, networking and security infrastructure, etc.---while Ansible can install [...]
+
+## Requirements and setup
+
+In order to install a Pulsar cluster on AWS using Terraform and Ansible, you 
need to prepare the following things:
+
+* An [AWS account](https://aws.amazon.com/account/) and the 
[`aws`](https://aws.amazon.com/cli/) command-line tool
+* Python and [pip](https://pip.pypa.io/en/stable/)
+* The [`terraform-inventory`](https://github.com/adammck/terraform-inventory) 
tool, which enables Ansible to use Terraform artifacts
+
+You also need to make sure that you are currently logged into your AWS account 
via the `aws` tool:
+
+```bash
+
+$ aws configure
+
+```
+
+## Installation
+
+You can install Ansible on Linux or macOS using pip.
+
+```bash
+
+$ pip install ansible
+
+```
+
+You can install Terraform using the instructions 
[here](https://www.terraform.io/intro/getting-started/install.html).
+
+You also need to have the Terraform and Ansible configuration for Pulsar 
locally on your machine. You can find them in the [GitHub 
repository](https://github.com/apache/pulsar) of Pulsar, which you can fetch 
using Git commands:
+
+```bash
+
+$ git clone https://github.com/apache/pulsar
+$ cd pulsar/deployment/terraform-ansible/aws
+
+```
+
+## SSH setup
+
+> If you already have an SSH key and want to use it, you can skip generating a new key. Instead, update the `private_key_file` setting
+> in the `ansible.cfg` file and the `public_key_path` setting in the `terraform.tfvars` file.
+>
+> For example, if you already have a private SSH key in `~/.ssh/pulsar_aws` and a public key in `~/.ssh/pulsar_aws.pub`,
+> follow the steps below:
+>
+> 1. Update `ansible.cfg` with the following value:
+>
+> ```shell
+> private_key_file=~/.ssh/pulsar_aws
+> ```
+>
+> 2. Update `terraform.tfvars` with the following value:
+>
+> ```shell
+> public_key_path=~/.ssh/pulsar_aws.pub
+> ```
+
+In order to create the necessary AWS resources using Terraform, you need to 
create an SSH key. Enter the following commands to create a private SSH key in 
`~/.ssh/id_rsa` and a public key in `~/.ssh/id_rsa.pub`:
+
+```bash
+
+$ ssh-keygen -t rsa
+
+```
+
+Do *not* enter a passphrase (press **Enter** at the prompt instead). Enter the following command to verify that a key has been created:
+
+```bash
+
+$ ls ~/.ssh
+id_rsa               id_rsa.pub
+
+```
+
+## Create AWS resources using Terraform
+
+To start building AWS resources with Terraform, you need to install all 
Terraform dependencies. Enter the following command:
+
+```bash
+
+$ terraform init
+# This will create a .terraform folder
+
+```
+
+After that, you can apply the default Terraform configuration by entering this 
command:
+
+```bash
+
+$ terraform apply
+
+```
+
+You then see the following prompt:
+
+```bash
+
+Do you want to perform these actions?
+  Terraform will perform the actions described above.
+  Only 'yes' will be accepted to approve.
+
+  Enter a value:
+
+```
+
+Type `yes` and hit **Enter**. Applying the configuration could take several minutes. When the apply finishes, you see `Apply complete!` along with other information, including the number of resources created.
+
+### Apply a non-default configuration
+
+You can apply a non-default Terraform configuration by changing the values in 
the `terraform.tfvars` file. The following variables are available:
+
+Variable name | Description | Default
+:-------------|:------------|:-------
+`public_key_path` | The path of the public key that you have generated. | 
`~/.ssh/id_rsa.pub`
+`region` | The AWS region in which the Pulsar cluster runs | `us-west-2`
+`availability_zone` | The AWS availability zone in which the Pulsar cluster 
runs | `us-west-2a`
+`aws_ami` | The [Amazon Machine 
Image](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) (AMI) that 
the cluster uses  | `ami-9fa343e7`
+`num_zookeeper_nodes` | The number of 
[ZooKeeper](https://zookeeper.apache.org) nodes in the ZooKeeper cluster | 3
+`num_bookie_nodes` | The number of bookies that run in the cluster | 3
+`num_broker_nodes` | The number of Pulsar brokers that run in the cluster | 2
+`num_proxy_nodes` | The number of Pulsar proxies that run in the cluster | 1
+`base_cidr_block` | The root [CIDR](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing) that network assets use for the cluster | `10.0.0.0/16`
+`instance_types` | The EC2 instance types to be used. This variable is a map with four keys: `zookeeper` for the ZooKeeper instances, `bookie` for the BookKeeper bookies, and `broker` and `proxy` for the Pulsar brokers and proxies | `t2.small` (ZooKeeper), `i3.xlarge` (BookKeeper) and `c5.2xlarge` (Brokers/Proxies)
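
For example, a `terraform.tfvars` that overrides a few of these defaults might look like the following (the values shown are illustrative, not recommendations):

```
public_key_path   = "~/.ssh/id_rsa.pub"
region            = "us-east-1"
availability_zone = "us-east-1a"
num_bookie_nodes  = 4
num_broker_nodes  = 3
```

Any variable you omit keeps the default from the table above.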
+
+### What is installed
+
+When you apply the Terraform configuration, the following AWS resources are created:
+
+* 9 total [Elastic Compute Cloud](https://aws.amazon.com/ec2) (EC2) instances 
running the [ami-9fa343e7](https://access.redhat.com/articles/3135091) Amazon 
Machine Image (AMI), which runs [Red Hat Enterprise Linux (RHEL) 
7.4](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/7.4_release_notes/index).
 By default, that includes:
+  * 3 small VMs for ZooKeeper 
([t2.small](https://www.ec2instances.info/?selected=t2.small) instances)
+  * 3 larger VMs for BookKeeper [bookies](reference-terminology.md#bookie) 
([i3.xlarge](https://www.ec2instances.info/?selected=i3.xlarge) instances)
+  * 2 larger VMs for Pulsar [brokers](reference-terminology.md#broker) 
([c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instances)
+  * 1 larger VM for the Pulsar [proxy](reference-terminology.md#proxy) (a [c5.2xlarge](https://www.ec2instances.info/?selected=c5.2xlarge) instance)
+* An EC2 [security 
group](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html)
+* A [virtual private cloud](https://aws.amazon.com/vpc/) (VPC) for security
+* An [API Gateway](https://aws.amazon.com/api-gateway/) for connections from 
the outside world
+* A [route 
table](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html)
 for the Pulsar cluster's VPC
+* A 
[subnet](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html)
 for the VPC
+
+All EC2 instances for the cluster run in the 
[us-west-2](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html)
 region.
+
+### Fetch your Pulsar connection URL
+
+When you apply the Terraform configuration by entering the command `terraform 
apply`, Terraform outputs a value for the `pulsar_service_url`. The value 
should look something like this:
+
+```
+
+pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650
+
+```
+
+You can fetch that value at any time by entering the command `terraform output pulsar_service_url` or parsing the `terraform.tfstate` file (which is JSON, even though the filename does not reflect that):
+
+```bash
+
+$ cat terraform.tfstate | jq .modules[0].outputs.pulsar_service_url.value
+
+```
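
If `jq` is not available, you can extract the same value with a few lines of Python. The sketch below is an assumption based on the `jq` path above (a pre-0.12 state layout with outputs under `modules[0].outputs`); adjust the path for your Terraform version:

```python
import json

def pulsar_service_url(state_path="terraform.tfstate"):
    """Read the pulsar_service_url output from a Terraform state file.

    Assumes the state layout used by the jq example above, i.e. outputs
    live under modules[0].outputs (older Terraform state format).
    """
    with open(state_path) as f:
        state = json.load(f)
    return state["modules"][0]["outputs"]["pulsar_service_url"]["value"]
```

Run in the directory containing the state file, `pulsar_service_url()` returns the `pulsar://...` connection string shown above.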
+
+### Destroy your cluster
+
+At any point, you can destroy all AWS resources associated with your cluster 
using Terraform's `destroy` command:
+
+```bash
+
+$ terraform destroy
+
+```
+
+## Set up disks
+
+Before you run the Pulsar playbook, you need to mount the disks to the correct directories on the bookie nodes. Since different types of machines have different disk layouts, you need to update the task defined in the `setup-disk.yaml` file whenever you change the `instance_types` in your Terraform config.
+
+To set up disks on the bookie nodes, enter this command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  setup-disk.yaml
+
+```
+
+After that, the disks are mounted under `/mnt/journal` as the journal disk and `/mnt/storage` as the ledger disk.
+Enter this command only once. If you run it again after you have run the Pulsar playbook, your disks might be erased, causing the bookies to fail to start.
+
+## Run the Pulsar playbook
+
+Once you have created the necessary AWS resources using Terraform, you can 
install and run Pulsar on the Terraform-created EC2 instances using Ansible. 
+
+(Optional) If you want to use any [built-in IO connectors](io-connectors), edit the `Download Pulsar IO packages` task in the `deploy-pulsar.yaml` file and uncomment the connectors you want to use.
+
+To run the playbook, enter this command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  ../deploy-pulsar.yaml
+
+```
+
+If you have created a private SSH key at a location different from 
`~/.ssh/id_rsa`, you can specify the different location using the 
`--private-key` flag in the following command:
+
+```bash
+
+$ ansible-playbook \
+  --user='ec2-user' \
+  --inventory=`which terraform-inventory` \
+  --private-key="~/.ssh/some-non-default-key" \
+  ../deploy-pulsar.yaml
+
+```
+
+## Access the cluster
+
+You can now access your running Pulsar cluster using the unique Pulsar connection URL for your cluster, which you can obtain following the instructions [above](#fetch-your-pulsar-connection-url).
+
+For a quick demonstration of accessing the cluster, we can use the Python 
client for Pulsar and the Python shell. First, install the Pulsar Python module 
using pip:
+
+```bash
+
+$ pip install pulsar-client
+
+```
+
+Now, open up the Python shell using the `python` command:
+
+```bash
+
+$ python
+
+```
+
+Once you are in the shell, enter the following command:
+
+```python
+
+>>> import pulsar
+>>> client = 
pulsar.Client('pulsar://pulsar-elb-1800761694.us-west-2.elb.amazonaws.com:6650')
+# Make sure to use your connection URL
+>>> producer = client.create_producer('persistent://public/default/test-topic')
+>>> producer.send(b'Hello world')
+>>> client.close()
+
+```
+
+If all of these commands are successful, Pulsar clients can now use your 
cluster!
diff --git 
a/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal-multi-cluster.md
 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal-multi-cluster.md
new file mode 100644
index 0000000..783b171
--- /dev/null
+++ 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal-multi-cluster.md
@@ -0,0 +1,483 @@
+---
+id: deploy-bare-metal-multi-cluster
+title: Deploying a multi-cluster on bare metal
+sidebar_label: "Bare metal multi-cluster"
+original_id: deploy-bare-metal-multi-cluster
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using it in a startup or on a single team, opt for a single cluster. For instructions on deploying a single cluster,
+> see the guide [here](deploy-bare-metal).
+>
+> 2. If you want to use all built-in [Pulsar IO](io-overview) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and install it under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and install it under the `offloaders` directory in the Pulsar directory on every broker node. For details on configuring
+> this feature, see the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+A Pulsar *instance* consists of multiple Pulsar clusters working in unison. 
You can distribute clusters across data centers or geographical regions and 
replicate the clusters amongst themselves using 
[geo-replication](administration-geo). Deploying a multi-cluster Pulsar 
instance involves the following basic steps:
+
+* Deploying two separate [ZooKeeper](#deploy-zookeeper) quorums: a 
[local](#deploy-local-zookeeper) quorum for each cluster in the instance and a 
[configuration store](#configuration-store) quorum for instance-wide tasks
+* Initializing [cluster metadata](#cluster-metadata-initialization) for each 
cluster
+* Deploying a [BookKeeper cluster](#deploy-bookkeeper) of bookies in each 
Pulsar cluster
+* Deploying [brokers](#deploy-brokers) in each Pulsar cluster
+
+If you want to deploy a single Pulsar cluster, see [Clusters and 
Brokers](getting-started-standalone.md#start-the-cluster).
+
+> #### Run Pulsar locally or on Kubernetes?
+> This guide shows you how to deploy Pulsar in production in a non-Kubernetes 
environment. If you want to run a standalone Pulsar cluster on a single machine 
for development purposes, see the [Setting up a local 
cluster](getting-started-standalone.md) guide. If you want to run Pulsar on 
[Kubernetes](https://kubernetes.io), see the [Pulsar on 
Kubernetes](deploy-kubernetes) guide, which includes sections on running Pulsar 
on Kubernetes on [Google Kubernetes Engine](deploy-kubernetes#pulsar [...]
+
+## System requirement
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and **Windows**. To use Pulsar, you need to install a 64-bit JRE/JDK 8 or later.
+
+## Install Pulsar
+
+To get started running Pulsar, download a binary tarball release in one of the 
following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ 
binary release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases 
page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget 
'https://www.apache.org/dyn/mirrors/mirrors.cgi?action=download&filename=pulsar/pulsar-@pulsar:version@/apache-pulsar-@pulsar:[email protected]'
 -O apache-pulsar-@pulsar:[email protected]
+  
+  ```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+
+$ tar xvfz apache-pulsar-@pulsar:[email protected]
+$ cd apache-pulsar-@pulsar:version@
+
+```
+
+## What your package contains
+
+The Pulsar binary package initially contains the following directories:
+
+Directory | Contains
+:---------|:--------
+`bin` | [Command-line tools](reference-cli-tools) of Pulsar, such as 
[`pulsar`](reference-cli-tools.md#pulsar) and 
[`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
+`conf` | Configuration files for Pulsar, including for [broker 
configuration](reference-configuration.md#broker), [ZooKeeper 
configuration](reference-configuration.md#zookeeper), and more
+`examples` | A Java JAR file containing example [Pulsar 
Functions](functions-overview)
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that 
Pulsar uses 
+`licenses` | License files, in `.txt` form, for various components of the 
Pulsar codebase
+
+The following directories are created once you begin running Pulsar:
+
+Directory | Contains
+:---------|:--------
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`instances` | Artifacts created for [Pulsar Functions](functions-overview)
+`logs` | Logs that the installation creates
+
+
+## Deploy ZooKeeper
+
+Each Pulsar instance relies on two separate ZooKeeper quorums.
+
+* [Local ZooKeeper](#deploy-local-zookeeper) operates at the cluster level and 
provides cluster-specific configuration management and coordination. Each 
Pulsar cluster needs to have a dedicated ZooKeeper cluster.
+* [Configuration Store](#deploy-the-configuration-store) operates at the 
instance level and provides configuration management for the entire system (and 
thus across clusters). An independent cluster of machines or the same machines 
that local ZooKeeper uses can provide the configuration store quorum.
+
+
+### Deploy local ZooKeeper
+
+ZooKeeper manages a variety of essential coordination-related and 
configuration-related tasks for Pulsar.
+
+You need to stand up one local ZooKeeper cluster *per Pulsar cluster* for 
deploying a Pulsar instance. 
+
+To begin, add all ZooKeeper servers to the quorum configuration specified in 
the [`conf/zookeeper.conf`](reference-configuration.md#zookeeper) file. Add a 
`server.N` line for each node in the cluster to the configuration, where `N` is 
the number of the ZooKeeper node. The following is an example for a three-node 
cluster:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+On each host, you need to specify the node's ID in the `myid` file, which is in the `data/zookeeper` folder of each server by default (you can change the file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
+
+> See the [Multi-server setup 
guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup)
 in the ZooKeeper documentation for detailed information on `myid` and more.
+
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set 
the `myid` value like this:
+
+```shell
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com` the command looks like `echo 2 > 
data/zookeeper/myid` and so on.
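
When you manage many hosts, the `myid` assignment can also be scripted. The following Python sketch is a hypothetical helper (the hostnames and the default `data/zookeeper` location come from the example above); it derives a host's ID from its `server.N` entry and writes the `myid` file:

```python
import os

# server.N entries from conf/zookeeper.conf, as in the example above
SERVERS = {
    1: "zk1.us-west.example.com",
    2: "zk2.us-west.example.com",
    3: "zk3.us-west.example.com",
}

def write_myid(hostname, data_dir="data/zookeeper"):
    """Write the ZooKeeper myid file that matches this host's server.N entry."""
    ids = [n for n, host in SERVERS.items() if host == hostname]
    if not ids:
        raise ValueError(f"{hostname} is not in the quorum configuration")
    os.makedirs(data_dir, exist_ok=True)
    with open(os.path.join(data_dir, "myid"), "w") as f:
        f.write(f"{ids[0]}\n")
    return ids[0]
```

Running `write_myid("zk2.us-west.example.com")` on that host is equivalent to the `echo 2 > data/zookeeper/myid` command above.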
+
+Once you add each server to the `zookeeper.conf` configuration and each server 
has the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the 
background, using nohup) with the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start zookeeper
+
+```
+
+### Deploy the configuration store 
+
+The ZooKeeper cluster that is configured and started up in the section above 
is a *local* ZooKeeper cluster that you can use to manage a single Pulsar 
cluster. In addition to a local cluster, however, a full Pulsar instance also 
requires a configuration store for handling some instance-level configuration 
and coordination tasks.
+
+If you deploy a [single-cluster](#single-cluster-pulsar-instance) instance, 
you do not need a separate cluster for the configuration store. If, however, 
you deploy a [multi-cluster](#multi-cluster-pulsar-instance) instance, you 
should stand up a separate ZooKeeper cluster for configuration tasks.
+
+#### Single-cluster Pulsar instance
+
+If your Pulsar instance consists of just one cluster, then you can deploy a 
configuration store on the same machines as the local ZooKeeper quorum but run 
on different TCP ports.
+
+To deploy a ZooKeeper configuration store in a single-cluster instance, add the same ZooKeeper servers that the local quorum uses to the configuration file in [`conf/global_zookeeper.conf`](reference-configuration.md#configuration-store) using the same method as for [local ZooKeeper](#deploy-local-zookeeper), but make sure to use a different port (2181 is the default for ZooKeeper). The following is an example that uses port 2184 for a three-node ZooKeeper cluster:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+
+```
+
+As before, create the `myid` files for each server in `data/global-zookeeper/myid`.
+
+#### Multi-cluster Pulsar instance
+
+When you deploy a global Pulsar instance, with clusters distributed across 
different geographical regions, the configuration store serves as a highly 
available and strongly consistent metadata store that can tolerate failures and 
partitions spanning whole regions.
+
+The key here is to make sure the ZooKeeper quorum members are spread across at least 3 regions and that the other regions run as observers.
+
+Again, given the very low expected load on the configuration store servers, you can share the same hosts used for the local ZooKeeper quorum.
+
+For example, assume a Pulsar instance with the following clusters: `us-west`, `us-east`, `us-central`, `eu-central`, and `ap-south`. Also assume that each cluster has its own local ZooKeeper servers named as follows:
+
+```
+
+zk[1-3].${CLUSTER}.example.com
+
+```
+
+In this scenario, you pick the quorum participants from a few clusters and let all the others be ZooKeeper observers. For example, to form a 7-server quorum, you can pick 3 servers from `us-west`, 2 from `us-central`, and 2 from `us-east`.
+
+This method guarantees that writes to the configuration store remain possible even if one of these regions is unreachable.
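
You can sanity-check the majority arithmetic behind this claim with a short Python sketch (a back-of-the-envelope helper, not part of Pulsar's tooling). With 3 voters in `us-west`, 2 in `us-central`, and 2 in `us-east`, losing any single region leaves at least 4 of the 7 voters, which is still a strict majority:

```python
# Voting (non-observer) quorum members per region, as chosen above
VOTERS = {"us-west": 3, "us-central": 2, "us-east": 2}

def survives_region_loss(voters):
    """Return True if a strict majority of voters remains after the loss of
    any single region (ZooKeeper needs floor(n/2) + 1 voters to commit)."""
    total = sum(voters.values())
    majority = total // 2 + 1
    return all(total - lost >= majority for lost in voters.values())
```

Here `survives_region_loss(VOTERS)` returns `True`, while an uneven split such as `{"us-west": 5, "us-central": 1, "us-east": 1}` would not survive the loss of `us-west`.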
+
+The ZooKeeper configuration on all the servers looks like this:
+
+```properties
+
+clientPort=2184
+server.1=zk1.us-west.example.com:2185:2186
+server.2=zk2.us-west.example.com:2185:2186
+server.3=zk3.us-west.example.com:2185:2186
+server.4=zk1.us-central.example.com:2185:2186
+server.5=zk2.us-central.example.com:2185:2186
+server.6=zk3.us-central.example.com:2185:2186:observer
+server.7=zk1.us-east.example.com:2185:2186
+server.8=zk2.us-east.example.com:2185:2186
+server.9=zk3.us-east.example.com:2185:2186:observer
+server.10=zk1.eu-central.example.com:2185:2186:observer
+server.11=zk2.eu-central.example.com:2185:2186:observer
+server.12=zk3.eu-central.example.com:2185:2186:observer
+server.13=zk1.ap-south.example.com:2185:2186:observer
+server.14=zk2.ap-south.example.com:2185:2186:observer
+server.15=zk3.ap-south.example.com:2185:2186:observer
+
+```
+
+Additionally, ZooKeeper observers need the following parameter:
+
+```properties
+
+peerType=observer
+
+```
+
+##### Start the service
+
+Once your configuration store configuration is in place, you can start up the service using [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon):
+
+```shell
+
+$ bin/pulsar-daemon start configuration-store
+
+```
+
+## Cluster metadata initialization
+
+Once you set up the cluster-specific ZooKeeper and configuration store quorums for your instance, you need to write some metadata to ZooKeeper for each cluster in your instance. **You only need to write this metadata once.**
+
+You can initialize this metadata using the 
[`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata)
 command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. The 
following is an example:
+
+```shell
+
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster us-west \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2184 \
+  --web-service-url http://pulsar.us-west.example.com:8080/ \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443/ \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650/ \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651/
+
+```
+
+As you can see from the example above, you need to specify the following:
+
+* The name of the cluster
+* The local ZooKeeper connection string for the cluster
+* The configuration store connection string for the entire instance
+* The web service URL for the cluster
+* A broker service URL enabling interaction with the 
[brokers](reference-terminology.md#broker) in the cluster
+
+If you use [TLS](security-tls-transport), you also need to specify a TLS web 
service URL for the cluster as well as a TLS broker service URL for the brokers 
in the cluster.
+
+Make sure to run `initialize-cluster-metadata` for each cluster in your 
instance.
+
+## Deploy BookKeeper
+
+BookKeeper provides [persistent message 
storage](concepts-architecture-overview.md#persistent-storage) for Pulsar.
+
+Each Pulsar broker needs to have its own cluster of bookies. The BookKeeper 
cluster shares a local ZooKeeper quorum with the Pulsar cluster.
+
+### Configure bookies
+
+You can configure BookKeeper bookies using the [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration file. The most important aspect of configuring each bookie is ensuring that the [`zkServers`](reference-configuration.md#bookkeeper-zkServers) parameter is set to the connection string for the Pulsar cluster's local ZooKeeper quorum.
+
+### Start bookies
+
+You can start a bookie in two ways: in the foreground or as a background 
daemon.
+
+To start a bookie in the background, use the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+You can verify that the bookie works properly using the `bookiesanity` command 
for the [BookKeeper shell](reference-cli-tools.md#bookkeeper-shell):
+
+```shell
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates a new ledger on the local bookie, writes a few entries, 
reads them back and finally deletes the ledger.
+
+After you have started all bookies, you can use the `simpletest` command for the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all bookies in the cluster are running:
+
+```bash
+
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum 
<num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+
+```
+
+Bookie hosts are responsible for storing message data on disk. For bookies to provide optimal performance, a suitable hardware configuration is essential. The following are key dimensions of bookie hardware capacity:
+
+* Disk I/O capacity read/write
+* Storage capacity
+
+Message entries written to bookies are always synced to disk before returning 
an acknowledgement to the Pulsar broker. To ensure low write latency, 
BookKeeper is
+designed to use multiple devices:
+
+* A **journal** to ensure durability. For sequential writes, having fast [fsync](https://linux.die.net/man/2/fsync) operations on bookie hosts is critical. Typically, small and fast [solid-state drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) should suffice, or [hard disk drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a [RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed write cache. Both solutions can reach fsync latency of ~0.4 ms.
+* A **ledger storage device** is where data is stored until all consumers 
acknowledge the message. Writes happen in the background, so write I/O is not a 
big concern. Reads happen sequentially most of the time and the backlog is 
drained only in case of consumer drain. To store large amounts of data, a 
typical configuration involves multiple HDDs with a RAID controller.
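+To reflect this device layout, you can point the journal and the ledger storage at separate mount points in [`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper). The following is a sketch only; the mount paths `/mnt/journal` and `/mnt/storage` are example values that you need to replace with your own:
+
+```properties
+
+# Journal on a small, fast SSD (example path)
+journalDirectory=/mnt/journal/bookkeeper/journal
+
+# Ledger storage on larger HDDs behind a RAID controller (example path)
+ledgerDirectories=/mnt/storage/bookkeeper/ledgers
+
+```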
+
+
+
+## Deploy brokers
+
+Once you set up ZooKeeper, initialize cluster metadata, and spin up BookKeeper 
bookies, you can deploy brokers.
+
+### Broker configuration
+
+You can configure brokers using the 
[`conf/broker.conf`](reference-configuration.md#broker) configuration file.
+
+The most important element of broker configuration is ensuring that each 
broker is aware of its local ZooKeeper quorum as well as the configuration 
store quorum. Make sure that you set the 
[`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) 
parameter to reflect the local quorum and the 
[`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers)
 parameter to reflect the configuration store quorum (although you need to 
specify only those  [...]
+
+You also need to specify the name of the [cluster](reference-terminology.md#cluster) to which the broker belongs using the [`clusterName`](reference-configuration.md#broker-clusterName) parameter. In addition, you need to match the broker and web service ports provided when you initialize the metadata of the cluster (especially when you use ports other than the defaults).
+
+The following is an example configuration:
+
+```properties
+
+# Local ZooKeeper servers
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Configuration store quorum connection string.
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+clusterName=us-west
+
+# Broker data port
+brokerServicePort=6650
+
+# Broker data port for TLS
+brokerServicePortTls=6651
+
+# Port to use to serve HTTP requests
+webServicePort=8080
+
+# Port to use to serve HTTPS requests
+webServicePortTls=8443
+
+```
+
+### Broker hardware
+
+Pulsar brokers do not require any special hardware since they do not use the local disk. Choose fast CPUs and a 10 Gbps [NIC](https://en.wikipedia.org/wiki/Network_interface_controller) so that the software can take full advantage of them.
+
+### Start the broker service
+
+You can start a broker in the background by using 
[nohup](https://en.wikipedia.org/wiki/Nohup) with the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```shell
+
+$ bin/pulsar-daemon start broker
+
+```
+
+You can also start brokers in the foreground by using [`pulsar 
broker`](reference-cli-tools.md#broker):
+
+```shell
+
+$ bin/pulsar broker
+
+```
+
+## Service discovery
+
+[Clients](getting-started-clients) connecting to Pulsar brokers need to be 
able to communicate with an entire Pulsar instance using a single URL. Pulsar 
provides a built-in service discovery mechanism that you can set up using the 
instructions [immediately below](#service-discovery-setup).
+
+You can also use your own service discovery system. If you do, you need to satisfy just one requirement: when a client performs an HTTP request to an [endpoint](reference-configuration) for a Pulsar cluster, such as `http://pulsar.us-west.example.com:8080`, the client needs to be redirected to *some* active broker in the desired cluster, whether via DNS, an HTTP or IP redirect, or some other means.
+
+> #### Service discovery already provided by many scheduling systems
+> Many large-scale deployment systems, such as 
[Kubernetes](deploy-kubernetes), have service discovery systems built in. If 
you run Pulsar on such a system, you may not need to provide your own service 
discovery mechanism.
+
+
+### Service discovery setup
+
+The service discovery mechanism included with Pulsar maintains a list of active brokers, stored in ZooKeeper, and supports lookup via HTTP as well as the Pulsar [binary protocol](developing-binary-protocol).
+
+To get started setting up the built-in service discovery of Pulsar, you need to change a few parameters in the [`conf/discovery.conf`](reference-configuration.md#service-discovery)
configuration file. Set the 
[`zookeeperServers`](reference-configuration.md#service-discovery-zookeeperServers)
 parameter to the ZooKeeper quorum connection string of the cluster and the 
[`configurationStoreServers`](reference-configuration.md#service-discovery-configurationStoreServers)
 setting to the [con [...]
+store](reference-terminology.md#configuration-store) quorum connection string.
+
+```properties
+
+# Zookeeper quorum connection string
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+# Global configuration store connection string
+configurationStoreServers=zk1.us-west.example.com:2184,zk2.us-west.example.com:2184,zk3.us-west.example.com:2184
+
+```
+
+To start the discovery service:
+
+```shell
+
+$ bin/pulsar-daemon start discovery
+
+```
+
+## Admin client and verification
+
+At this point your Pulsar instance should be ready to use. You can now 
configure client machines that can serve as [administrative 
clients](admin-api-overview) for each cluster. You can use the 
[`conf/client.conf`](reference-configuration.md#client) configuration file to 
configure admin clients.
+
+The most important thing is that you point the 
[`serviceUrl`](reference-configuration.md#client-serviceUrl) parameter to the 
correct service URL for the cluster:
+
+```properties
+
+serviceUrl=http://pulsar.us-west.example.com:8080/
+
+```
+
+## Provision new tenants
+
+Pulsar is built as a fundamentally multi-tenant system.
+
+
+If a new tenant wants to use the system, you need to create that tenant first. You can create a new tenant by using the [`pulsar-admin`](reference-pulsar-admin.md#tenants) CLI tool:
+
+```shell
+
+$ bin/pulsar-admin tenants create test-tenant \
+  --allowed-clusters us-west \
+  --admin-roles test-admin-role
+
+```
+
+With this command, users with the `test-admin-role` role can administer the configuration for the `test-tenant` tenant. The `test-tenant` tenant can only use the `us-west` cluster. From now on, this tenant can manage its own resources.
+
+Once you create a tenant, you need to create 
[namespaces](reference-terminology.md#namespace) for topics within that tenant.
+
+
+The first step is to create a namespace. A namespace is an administrative unit 
that can contain many topics. A common practice is to create a namespace for 
each different use case from a single tenant.
+
+```shell
+
+$ bin/pulsar-admin namespaces create test-tenant/ns1
+
+```
+
+### Test producer and consumer
+
+
+Everything is now ready to send and receive messages. The quickest way to test 
the system is through the [`pulsar-perf`](reference-cli-tools.md#pulsar-perf) 
client tool.
+
+
+You can use a topic in the namespace that you have just created. Topics are automatically created the first time a producer or a consumer tries to use them.
+
+The topic name in this case could be:
+
+```http
+
+persistent://test-tenant/ns1/my-topic
+
+```
+
+Start a consumer that creates a subscription on the topic and waits for 
messages:
+
+```shell
+
+$ bin/pulsar-perf consume persistent://test-tenant/ns1/my-topic
+
+```
+
+Start a producer that publishes messages at a fixed rate and reports stats 
every 10 seconds:
+
+```shell
+
+$ bin/pulsar-perf produce persistent://test-tenant/ns1/my-topic
+
+```
+
+To report the topic stats:
+
+```shell
+
+$ bin/pulsar-admin topics stats persistent://test-tenant/ns1/my-topic
+
+```
+
diff --git 
a/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal.md
new file mode 100644
index 0000000..6782313
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-bare-metal.md
@@ -0,0 +1,546 @@
+---
+id: deploy-bare-metal
+title: Deploy a cluster on bare metal
+sidebar_label: "Bare metal"
+original_id: deploy-bare-metal
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+> ### Tips
+>
+> 1. Single-cluster Pulsar installations should be sufficient for all but the 
most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using Pulsar in a startup or on a single team, it is simplest to 
opt for a single cluster. If you do need to run a multi-cluster Pulsar instance,
+> see the guide [here](deploy-bare-metal-multi-cluster).
+>
+> 2. If you want to use all builtin [Pulsar IO](io-overview) connectors in your Pulsar deployment, you need to download the `apache-pulsar-io-connectors`
+> package and install it under the `connectors` directory in the Pulsar directory on every broker node, or on every function-worker node if you
+> run a separate cluster of function workers for [Pulsar Functions](functions-overview).
+>
+> 3. If you want to use the [Tiered Storage](concepts-tiered-storage) feature in your Pulsar deployment, you need to download the `apache-pulsar-offloaders`
+> package and install it under the `offloaders` directory in the Pulsar directory on every broker node. For more details on how to configure
+> this feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+Deploying a Pulsar cluster involves doing the following (in order):
+
+* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
+* Initialize [cluster metadata](#initialize-cluster-metadata)
+* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
+* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
+
+## Preparation
+
+### Requirements
+
+Currently, Pulsar is available for 64-bit **macOS**, **Linux**, and 
**Windows**. To use Pulsar, you need to install 64-bit JRE/JDK 8 or later 
versions.
+
+> If you already have an existing ZooKeeper cluster and want to reuse it, you 
do not need to prepare the machines
+> for running ZooKeeper.
+
+To run Pulsar on bare metal, the following configuration is recommended:
+
+* At least 6 Linux machines or VMs
+  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 for running a Pulsar broker, and a 
[BookKeeper](https://bookkeeper.apache.org) bookie
+* A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name 
covering all of the Pulsar broker hosts
+
+> If you do not have enough machines, or want to try out Pulsar in cluster mode (and expand the cluster later),
+> you can deploy a full Pulsar configuration on one node, where ZooKeeper, the bookie, and the broker run on the same machine.
+
+> If you do not have a DNS server, you can use the multi-host format in the 
service URL instead.
+
+Each machine in your cluster needs to have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or a more recent version of Java installed.
+
+The following is a diagram showing the basic setup:
+
+![alt-text](/assets/pulsar-basic-setup.png)
+
+In this diagram, connecting clients need to be able to communicate with the 
Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` 
abstracts over all of the message-handling brokers. Pulsar message brokers run 
on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on 
ZooKeeper.
+
+### Hardware considerations
+
+When you deploy a Pulsar cluster, keep the following basic considerations in mind for capacity planning.
+
+#### ZooKeeper
+
+For machines running ZooKeeper, it is recommended to use less powerful machines or VMs. Pulsar uses ZooKeeper only for periodic coordination-related and configuration-related tasks, *not* for basic operations. If you run Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a [t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html) instance would likely suffice.
+
+#### Bookies and Brokers
+
+For machines running a bookie and a Pulsar broker, more powerful machines are 
required. For an AWS deployment, for example, 
[i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/)
 instances may be appropriate. On those machines you can use the following:
+
+* Fast CPUs and 10Gbps 
[NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar 
brokers)
+* Small and fast [solid-state 
drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk 
drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a 
[RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed 
write cache (for BookKeeper bookies)
+
+## Install the Pulsar binary package
+
+> You need to install the Pulsar binary package on *each machine in the 
cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) 
and [BookKeeper](#deploy-a-bookkeeper-cluster).
+
+To get started deploying a Pulsar cluster on bare metal, you need to download 
a binary tarball release in one of the following ways:
+
+* By clicking on the link below directly, which automatically triggers a 
download:
+  * <a href="pulsar:binary_release_url" download>Pulsar @pulsar:version@ 
binary release</a>
+* From the Pulsar [downloads page](pulsar:download_page_url)
+* From the Pulsar [releases 
page](https://github.com/apache/pulsar/releases/latest) on 
[GitHub](https://github.com)
+* Using [wget](https://www.gnu.org/software/wget):
+
+```bash
+
+$ wget pulsar:binary_release_url
+
+```
+
+Once you download the tarball, untar it and `cd` into the resulting directory:
+
+```bash
+
+$ tar xvzf apache-pulsar-@pulsar:[email protected]
+$ cd apache-pulsar-@pulsar:version@
+
+```
+
+The extracted directory contains the following subdirectories:
+
+Directory | Contains
+:---------|:--------
+`bin` |[command-line tools](reference-cli-tools) of Pulsar, such as 
[`pulsar`](reference-cli-tools.md#pulsar) and 
[`pulsar-admin`](https://pulsar.apache.org/tools/pulsar-admin/)
+`conf` | Configuration files for Pulsar, including for [broker 
configuration](reference-configuration.md#broker), [ZooKeeper 
configuration](reference-configuration.md#zookeeper), and more
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that 
Pulsar uses
+`logs` | Logs that the installation creates
+
+## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
+
+> Since Pulsar release `2.1.0-incubating`, Pulsar provides a separate binary distribution containing all the `builtin` connectors.
+> If you want to enable those `builtin` connectors, follow the instructions below; otherwise, you can skip this section for now.
+
+To get started using builtin connectors, you need to download the connectors 
tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:connector_release_url" download>Pulsar IO Connectors 
@pulsar:version@ release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases 
page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:connector_release_url/{connector}-@pulsar:[email protected]
+  
+  ```
+
+Once you download the `.nar` file, copy the file to the `connectors` directory in the Pulsar directory.
+For example, if you download the connector file `pulsar-io-aerospike-@pulsar:[email protected]`:
+
+```bash
+
+$ mkdir connectors
+$ mv pulsar-io-aerospike-@pulsar:[email protected] connectors
+
+$ ls connectors
+pulsar-io-aerospike-@pulsar:[email protected]
+...
+
+```
+
+## [Install Tiered Storage Offloaders 
(optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
+
+> Since Pulsar release `2.2.0`, Pulsar releases a separate binary distribution containing the tiered storage offloaders.
+> If you want to enable the tiered storage feature, follow the instructions below; otherwise, you can skip this section for now.
+
+To get started using tiered storage offloaders, you need to download the 
offloaders tarball release on every broker node in one of the following ways:
+
+* by clicking the link below and downloading the release from an Apache mirror:
+
+  * <a href="pulsar:offloader_release_url" download>Pulsar Tiered Storage 
Offloaders @pulsar:version@ release</a>
+
+* from the Pulsar [downloads page](pulsar:download_page_url)
+* from the Pulsar [releases 
page](https://github.com/apache/pulsar/releases/latest)
+* using [wget](https://www.gnu.org/software/wget):
+
+  ```shell
+  
+  $ wget pulsar:offloader_release_url
+  
+  ```
+
+Once you download the tarball, untar the offloaders package in the Pulsar directory and copy the extracted offloaders into a directory named `offloaders` in the Pulsar directory:
+
+```bash
+
+$ tar xvfz apache-pulsar-offloaders-@pulsar:[email protected]
+
+// you can find a directory named `apache-pulsar-offloaders-@pulsar:version@` 
in the pulsar directory
+// then copy the offloaders
+
+$ mv apache-pulsar-offloaders-@pulsar:version@/offloaders offloaders
+
+$ ls offloaders
+tiered-storage-jcloud-@pulsar:[email protected]
+
+```
+
+For more details on how to configure the tiered storage feature, refer to the [Tiered storage cookbook](cookbooks-tiered-storage).
+
+
+## Deploy a ZooKeeper cluster
+
+> If you already have an existing ZooKeeper cluster and want to use it, you can skip this section.
+
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential 
coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar 
cluster, you need to deploy ZooKeeper first (before all other components). A 
3-node ZooKeeper cluster is the recommended configuration. Pulsar does not make 
heavy use of ZooKeeper, so more lightweight machines or VMs should suffice for 
running ZooKeeper.
+
+To begin, add all ZooKeeper servers to the configuration specified in 
[`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar 
directory that you create [above](#install-the-pulsar-binary-package)). The 
following is an example:
+
+```properties
+
+server.1=zk1.us-west.example.com:2888:3888
+server.2=zk2.us-west.example.com:2888:3888
+server.3=zk3.us-west.example.com:2888:3888
+
+```
+
+> If you only have one machine on which to deploy Pulsar, you only need to add 
one server entry in the configuration file.
+
+On each host, you need to specify the ID of the node in the `myid` file, which 
is in the `data/zookeeper` folder of each server by default (you can change the 
file location via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) 
parameter).
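+For example, to keep ZooKeeper data under a dedicated mount point instead of the default `data/zookeeper`, you can override `dataDir` in `conf/zookeeper.conf`. This is a sketch only; the path `/mnt/zookeeper` is an example value that you need to replace with your own:
+
+```properties
+
+# Store ZooKeeper data (including the myid file) on a dedicated disk (example path)
+dataDir=/mnt/zookeeper
+
+```
+
+With this setting, the `myid` file for each server lives under that directory (for example, `/mnt/zookeeper/myid`).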
+
+> See the [Multi-server setup 
guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup)
 in the ZooKeeper documentation for detailed information on `myid` and more.
+
+For example, on a ZooKeeper server like `zk1.us-west.example.com`, you can set 
the `myid` value as follows:
+
+```bash
+
+$ mkdir -p data/zookeeper
+$ echo 1 > data/zookeeper/myid
+
+```
+
+On `zk2.us-west.example.com`, the command is `echo 2 > data/zookeeper/myid` 
and so on.
+
+Once you add each server to the `zookeeper.conf` configuration and have the 
appropriate `myid` entry, you can start ZooKeeper on all hosts (in the 
background, using nohup) with the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start zookeeper
+
+```
+
+> If you plan to deploy ZooKeeper and a bookie on the same node, you
+> need to start ZooKeeper with a different stats port.
+
+Start ZooKeeper with the [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool as follows:
+
+```bash
+
+$ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start 
zookeeper
+
+```
+
+## Initialize cluster metadata
+
+Once you deploy ZooKeeper for your cluster, you need to write some metadata to 
ZooKeeper for each cluster in your instance. You only need to write this data 
**once**.
+
+You can initialize this metadata using the 
[`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata)
 command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This 
command can be run on any machine in your ZooKeeper cluster. The following is 
an example:
+
+```shell
+
+$ bin/pulsar initialize-cluster-metadata \
+  --cluster pulsar-cluster-1 \
+  --zookeeper zk1.us-west.example.com:2181 \
+  --configuration-store zk1.us-west.example.com:2181 \
+  --web-service-url http://pulsar.us-west.example.com:8080 \
+  --web-service-url-tls https://pulsar.us-west.example.com:8443 \
+  --broker-service-url pulsar://pulsar.us-west.example.com:6650 \
+  --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
+
+```
+
+As you can see from the example above, you will need to specify the following:
+
+Flag | Description
+:----|:-----------
+`--cluster` | A name for the cluster
+`--zookeeper` | A "local" ZooKeeper connection string for the cluster. This 
connection string only needs to include *one* machine in the ZooKeeper cluster.
+`--configuration-store` | The configuration store connection string for the 
entire instance. As with the `--zookeeper` flag, this connection string only 
needs to include *one* machine in the ZooKeeper cluster.
+`--web-service-url` | The web service URL for the cluster, plus a port. This URL should be a standard DNS name. The default port is 8080 (using a different port is not recommended).
+`--web-service-url-tls` | If you use [TLS](security-tls-transport), you also need to specify a TLS web service URL for the cluster. The default port is 8443 (using a different port is not recommended).
+`--broker-service-url` | A broker service URL enabling interaction with the brokers in the cluster. This URL should use the same DNS name as the web service URL, but with the `pulsar` scheme instead. The default port is 6650 (using a different port is not recommended).
+`--broker-service-url-tls` | If you use [TLS](security-tls-transport), you also need to specify a TLS broker service URL for the brokers in the cluster. The default port is 6651 (using a different port is not recommended).
+
+
+> If you do not have a DNS server, you can use multi-host format in the 
service URL with the following settings:
+>
+
+> ```properties
+> 
+> --web-service-url http://host1:8080,host2:8080,host3:8080 \
+> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
+> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
+> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
+>
+> 
+> ```
+
+>
+> If you want to use an existing BookKeeper cluster, you can add the 
`--existing-bk-metadata-service-uri` flag as follows:
+>
+
+> ```properties
+> 
+> --existing-bk-metadata-service-uri "zk+null://zk1:2181;zk2:2181/ledgers" \
+> --web-service-url http://host1:8080,host2:8080,host3:8080 \
+> --web-service-url-tls https://host1:8443,host2:8443,host3:8443 \
+> --broker-service-url pulsar://host1:6650,host2:6650,host3:6650 \
+> --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
+>
+> 
+> ```
+
+> You can obtain the metadata service URI of the existing BookKeeper cluster 
by using the `bin/bookkeeper shell whatisinstanceid` command. You must enclose 
the value in double quotes since the multiple metadata service URIs are 
separated with semicolons.
+
+## Deploy a BookKeeper cluster
+
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data 
storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use 
Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.
+
+You can configure BookKeeper bookies using the 
[`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration 
file. The most important step in configuring bookies for our purposes here is 
ensuring that [`zkServers`](reference-configuration.md#bookkeeper-zkServers) is 
set to the connection string for the ZooKeeper cluster. The following is an 
example:
+
+```properties
+
+zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+```
+
+Once you appropriately modify the `zkServers` parameter, you can make any 
other configuration changes that you require. You can find a full listing of 
the available BookKeeper configuration parameters 
[here](reference-configuration.md#bookkeeper). However, consulting the 
[BookKeeper 
documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for 
a more in-depth guide might be a better choice.
+
+Once you apply the desired configuration in `conf/bookkeeper.conf`, you can 
start up a bookie on each of your BookKeeper hosts. You can start up each 
bookie either in the background, using 
[nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+
+To start the bookie in the background, use the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start bookie
+
+```
+
+To start the bookie in the foreground:
+
+```bash
+
+$ bin/pulsar bookie
+
+```
+
+You can verify that a bookie works properly by running the `bookiesanity` 
command on the [BookKeeper shell](reference-cli-tools.md#shell):
+
+```bash
+
+$ bin/bookkeeper shell bookiesanity
+
+```
+
+This command creates an ephemeral BookKeeper ledger on the local bookie, 
writes a few entries, reads them back, and finally deletes the ledger.
+
+After you start all the bookies, you can use the `simpletest` command of the [BookKeeper shell](reference-cli-tools.md#shell) on any bookie node to verify that all the bookies in the cluster are up and running.
+
+```bash
+
+$ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum 
<num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
+
+```
+
+This command creates a `num-bookies` sized ledger on the cluster, writes a few 
entries, and finally deletes the ledger.
+
+
+## Deploy Pulsar brokers
+
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. 
Brokers handle Pulsar messages and provide the administrative interface of 
Pulsar. A good choice is to run **3 brokers**, one for each machine that 
already runs a BookKeeper bookie.
+
+### Configure Brokers
+
+The most important element of broker configuration is ensuring that each 
broker is aware of the ZooKeeper cluster that you have deployed. Ensure that 
the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) 
and 
[`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers)
parameters are correct. In this case, since you have only one cluster and no separate configuration store, the `configurationStoreServers` parameter points to the same servers as `zookeeperServers`.
+
+```properties
+
+zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
+
+```
+
+You also need to specify the cluster name (matching the name that you provided 
when you [initialize the metadata of the 
cluster](#initialize-cluster-metadata)):
+
+```properties
+
+clusterName=pulsar-cluster-1
+
+```
+
+In addition, you need to match the broker and web service ports provided when 
you initialize the metadata of the cluster (especially when you use a different 
port than the default):
+
+```properties
+
+brokerServicePort=6650
+brokerServicePortTls=6651
+webServicePort=8080
+webServicePortTls=8443
+
+```
+
+> If you deploy Pulsar in a one-node cluster, you should update the 
replication settings in `conf/broker.conf` to `1`.
+>
+
+> ```properties
+> 
+> # Number of bookies to use when creating a ledger
+> managedLedgerDefaultEnsembleSize=1
+>
+> # Number of copies to store for each message
+> managedLedgerDefaultWriteQuorum=1
+> 
+> # Number of guaranteed copies (acks to wait before write is complete)
+> managedLedgerDefaultAckQuorum=1
+>
+> 
+> ```
+
+### Enable Pulsar Functions (optional)
+
+If you want to enable [Pulsar Functions](functions-overview), follow the instructions below:
+
+1. Edit `conf/broker.conf` to enable the functions worker by setting `functionsWorkerEnabled` to `true`.
+
+   ```conf
+   
+   functionsWorkerEnabled=true
+   
+   ```
+
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the 
cluster name that you provide when you [initialize the metadata of the 
cluster](#initialize-cluster-metadata). 
+
+   ```conf
+   
+   pulsarFunctionsCluster: pulsar-cluster-1
+   
+   ```
+
+To learn more about the options for deploying the functions worker, see [Deploy and manage functions worker](functions-worker).
+
+### Start Brokers
+
+You can then provide any other configuration changes that you want in the 
[`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide 
on a configuration, you can start up the brokers for your Pulsar cluster. Like 
ZooKeeper and BookKeeper, you can start brokers either in the foreground or in 
the background, using nohup.
+
+You can start a broker in the foreground using the [`pulsar 
broker`](reference-cli-tools.md#pulsar-broker) command:
+
+```bash
+
+$ bin/pulsar broker
+
+```
+
+You can start a broker in the background using the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+
+```bash
+
+$ bin/pulsar-daemon start broker
+
+```
+
+Once you successfully start up all the brokers that you intend to use, your 
Pulsar cluster should be ready to go!
+
+## Connect to the running cluster
+
+Once your Pulsar cluster is up and running, you should be able to connect to it using Pulsar clients. One such client is the [`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included with the Pulsar binary package. The `pulsar-client` tool can publish messages to and consume messages from Pulsar topics, providing a simple way to make sure that your cluster runs properly.
+
+To use the `pulsar-client` tool, first modify the client configuration file in 
[`conf/client.conf`](reference-configuration.md#client) in your binary package. 
You need to change the values for `webServiceUrl` and `brokerServiceUrl`, 
substituting `localhost` (which is the default), with the DNS name that you 
assign to your broker/bookie hosts. The following is an example:
+
+```properties
+
+webServiceUrl=http://us-west.example.com:8080
+brokerServiceUrl=pulsar://us-west.example.com:6650
+
+```
+
+> If you do not have a DNS server, you can use the multi-host format in the service URLs as follows:
+>
+
+> ```properties
+> 
+> webServiceUrl=http://host1:8080,host2:8080,host3:8080
+> brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
+>
+> 
+> ```
+
+Once that is complete, you can publish a message to the Pulsar topic:
+
+```bash
+
+$ bin/pulsar-client produce \
+  persistent://public/default/test \
+  -n 1 \
+  -m "Hello Pulsar"
+
+```
+
+> You may need to use a different cluster name in the topic if you specify a 
cluster name other than `pulsar-cluster-1`.
+
+This command publishes a single message to the Pulsar topic. In addition, you 
can subscribe to the Pulsar topic in a different terminal before publishing 
messages as below:
+
+```bash
+
+$ bin/pulsar-client consume \
+  persistent://public/default/test \
+  -n 100 \
+  -s "consumer-test" \
+  -t "Exclusive"
+
+```
+
+Once you successfully publish the above message to the topic, you should see it in the standard output of the consumer terminal:
+
+```bash
+
+----- got message -----
+Hello Pulsar
+
+```
+
+## Run Functions
+
+> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, you can try them out now.
+
+Create an ExclamationFunction `exclamation`.
+
+```bash
+
+bin/pulsar-admin functions create \
+  --jar examples/api-examples.jar \
+  --classname org.apache.pulsar.functions.api.examples.ExclamationFunction \
+  --inputs persistent://public/default/exclamation-input \
+  --output persistent://public/default/exclamation-output \
+  --tenant public \
+  --namespace default \
+  --name exclamation
+
+```
+
+Check whether the function runs as expected by 
[triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+
+```bash
+
+bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello 
world"
+
+```
+
+You should see the following output:
+
+```shell
+
+hello world!
+
+```
+
diff --git a/site2/website-next/versioned_docs/version-2.7.1/deploy-dcos.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-dcos.md
new file mode 100644
index 0000000..14a3635
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-dcos.md
@@ -0,0 +1,202 @@
+---
+id: deploy-dcos
+title: Deploy Pulsar on DC/OS
+sidebar_label: "DC/OS"
+original_id: deploy-dcos
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+> ### Tips
+>
+> If you want to enable all builtin [Pulsar IO](io-overview) connectors in your Pulsar deployment, you can use the `apachepulsar/pulsar-all` image instead of
+> the `apachepulsar/pulsar` image. The `apachepulsar/pulsar-all` image already bundles [all builtin connectors](io-overview.md#working-with-connectors).
+
+[DC/OS](https://dcos.io/) (the <strong>D</strong>ata<strong>C</strong>enter 
<strong>O</strong>perating <strong>S</strong>ystem) is a distributed operating 
system used for deploying and managing applications and systems on [Apache 
Mesos](http://mesos.apache.org/). DC/OS is an open-source tool created and maintained by [Mesosphere](https://mesosphere.com/).
+
+Apache Pulsar is available as a [Marathon Application 
Group](https://mesosphere.github.io/marathon/docs/application-groups.html), 
which runs multiple applications as manageable sets.
+
+## Prerequisites
+
+To run Pulsar on DC/OS, you need the following:
+
+* DC/OS version [1.9](https://docs.mesosphere.com/1.9/) or higher
+* A [DC/OS cluster](https://docs.mesosphere.com/1.9/installing/) with at least 
three agent nodes
+* The [DC/OS CLI tool](https://docs.mesosphere.com/1.9/cli/install/) installed
+* The 
[`PulsarGroups.json`](https://github.com/apache/pulsar/blob/master/deployment/dcos/PulsarGroups.json)
 configuration file from the Pulsar GitHub repo.
+
+  ```bash
+  
+  $ curl -O 
https://raw.githubusercontent.com/apache/pulsar/master/deployment/dcos/PulsarGroups.json
+  
+  ```
+
+Each node in the DC/OS-managed Mesos cluster must have at least:
+
+* 4 CPU
+* 4 GB of memory
+* 60 GB of total persistent disk
+
+Alternatively, you can change the configuration in `PulsarGroups.json` to match the resources of your DC/OS cluster.
+
+## Deploy Pulsar using the DC/OS command interface
+
+You can deploy Pulsar on DC/OS using this command:
+
+```bash
+
+$ dcos marathon group add PulsarGroups.json
+
+```
+
+This command deploys Docker container instances in three groups, which 
together comprise a Pulsar cluster:
+
+* 3 bookies (1 [bookie](reference-terminology.md#bookie) on each agent node 
and 1 [bookie 
recovery](http://bookkeeper.apache.org/docs/latest/admin/autorecovery/) 
instance)
+* 3 Pulsar [brokers](reference-terminology.md#broker) (1 broker on each node 
and 1 admin instance)
+* 1 [Prometheus](http://prometheus.io/) instance and 1 
[Grafana](https://grafana.com/) instance
+
+
+> When you run DC/OS, a ZooKeeper cluster already runs at `master.mesos:2181`, 
thus you do not have to install or start up ZooKeeper separately.
+
+After executing the `dcos` command above, click the **Services** tab in the DC/OS [GUI](https://docs.mesosphere.com/latest/gui/), which you can access at [http://m1.dcos](http://m1.dcos) in this example. You should see several applications in the process of deploying.
+
+![DC/OS command executed](/assets/dcos_command_execute.png)
+
+![DC/OS command executed2](/assets/dcos_command_execute2.png)
+
+## The BookKeeper group
+
+To monitor the status of the BookKeeper cluster deployment, click on the 
**bookkeeper** group in the parent **pulsar** group.
+
+![DC/OS bookkeeper status](/assets/dcos_bookkeeper_status.png)
+
+At this point, 3 [bookies](reference-terminology.md#bookie) should be shown as 
green, which means that the bookies have been deployed successfully and are now 
running.
+ 
+![DC/OS bookkeeper running](/assets/dcos_bookkeeper_run.png)
+ 
+You can also click into each bookie instance to get more detailed information, 
such as the bookie running log.
+
+![DC/OS bookie log](/assets/dcos_bookie_log.png)
+
+To display information about BookKeeper in ZooKeeper, you can visit 
[http://m1.dcos/exhibitor](http://m1.dcos/exhibitor). In this example, 3 
bookies are under the `available` directory.
+
+![DC/OS bookkeeper in zk](/assets/dcos_bookkeeper_in_zookeeper.png)
+
+## The Pulsar broker group
+
+Similar to the BookKeeper group above, click into the **brokers** group to check the status of the Pulsar brokers.
+
+![DC/OS broker status](/assets/dcos_broker_status.png)
+
+![DC/OS broker running](/assets/dcos_broker_run.png)
+
+You can also click into each broker instance to get more detailed information, 
such as the broker running log.
+
+![DC/OS broker log](/assets/dcos_broker_log.png)
+
+Broker cluster information in ZooKeeper is also available through the web UI. 
In this example, you can see that the `loadbalance` and `managed-ledgers` 
directories have been created.
+
+![DC/OS broker in zk](/assets/dcos_broker_in_zookeeper.png)
+
+## Monitor Group
+
+The **monitor** group consists of Prometheus and Grafana.
+
+![DC/OS monitor status](/assets/dcos_monitor_status.png)
+
+### Prometheus
+
+Click into the instance of `prom` to get the endpoint of Prometheus, which is 
`192.168.65.121:9090` in this example.
+
+![DC/OS prom endpoint](/assets/dcos_prom_endpoint.png)
+
+If you click that endpoint, you can see the Prometheus dashboard. The 
[http://192.168.65.121:9090/targets](http://192.168.65.121:9090/targets) URL 
displays all the bookies and brokers.
+
+![DC/OS prom targets](/assets/dcos_prom_targets.png)
+
+### Grafana
+
+Click into `grafana` to get the endpoint for Grafana, which is 
`192.168.65.121:3000` in this example.
+ 
+![DC/OS grafana endpoint](/assets/dcos_grafana_endpoint.png)
+
+If you click that endpoint, you can access the Grafana dashboard.
+
+![DC/OS grafana targets](/assets/dcos_grafana_dashboard.png)
+
+## Run a simple Pulsar consumer and producer on DC/OS
+
+Now that you have a fully deployed Pulsar cluster, you can run a simple 
consumer and producer to show Pulsar on DC/OS in action.
+
+### Download and prepare the Pulsar Java tutorial
+
+You can clone a [Pulsar Java 
tutorial](https://github.com/streamlio/pulsar-java-tutorial) repo. This repo 
contains a simple Pulsar consumer and producer (you can find more information 
in the `README` file of the repo).
+
+```bash
+
+$ git clone https://github.com/streamlio/pulsar-java-tutorial
+
+```
+
+Change the `SERVICE_URL` from `pulsar://localhost:6650` to 
`pulsar://a1.dcos:6650` in both 
[`ConsumerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ConsumerTutorial.java)
 and 
[`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java).
+The `pulsar://a1.dcos:6650` endpoint is for the broker service. You can fetch the endpoint details for each broker instance from the DC/OS GUI. `a1.dcos` is a DC/OS client agent that runs a broker; you can also use the agent's IP address in place of the hostname.
+
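+The change itself is a one-line substitution. The following is a minimal `sed` sketch, demonstrated on a stand-in file because the exact declaration in the repo may differ (the real constants live in the two tutorial files under `src/main/java/tutorial/`):

```shell

# Stand-in for the SERVICE_URL line in the tutorial sources (shape is assumed)
echo 'String SERVICE_URL = "pulsar://localhost:6650";' > StandIn.java

# Point the client at the DC/OS broker endpoint instead of localhost
sed -i.bak 's|pulsar://localhost:6650|pulsar://a1.dcos:6650|' StandIn.java

grep SERVICE_URL StandIn.java

```

The `-i.bak` form of in-place editing works with both GNU and BSD `sed`.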
+Now, change the message number from 10 to 10000000 in the main method of 
[`ProducerTutorial.java`](https://github.com/streamlio/pulsar-java-tutorial/blob/master/src/main/java/tutorial/ProducerTutorial.java)
 so that it produces more messages.
+
+Now compile the project code using the command below:
+
+```bash
+
+$ mvn clean package
+
+```
+
+### Run the consumer and producer
+
+Execute this command to run the consumer:
+
+```bash
+
+$ mvn exec:java -Dexec.mainClass="tutorial.ConsumerTutorial"
+
+```
+
+Execute this command to run the producer:
+
+```bash
+
+$ mvn exec:java -Dexec.mainClass="tutorial.ProducerTutorial"
+
+```
+
+You can see the producer producing messages and the consumer consuming 
messages through the DC/OS GUI.
+
+![DC/OS pulsar producer](/assets/dcos_producer.png)
+
+![DC/OS pulsar consumer](/assets/dcos_consumer.png)
+
+### View Grafana metric output
+
+While the producer and consumer run, you can view runtime metrics in Grafana.
+
+![DC/OS pulsar dashboard](/assets/dcos_metrics.png)
+
+
+## Uninstall Pulsar
+
+You can shut down and uninstall the `pulsar` application from DC/OS at any 
time in the following two ways:
+
+1. Using the DC/OS GUI, you can choose **Delete** at the right end of the Pulsar group.
+
+   ![DC/OS pulsar uninstall](/assets/dcos_uninstall.png)
+
+2. You can use the following command:
+
+   ```bash
+   
+   $ dcos marathon group remove /pulsar
+   
+   ```
+
diff --git a/site2/website-next/versioned_docs/version-2.7.1/deploy-docker.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-docker.md
new file mode 100644
index 0000000..f76318f
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-docker.md
@@ -0,0 +1,64 @@
+---
+id: deploy-docker
+title: Deploy a cluster on Docker
+sidebar_label: "Docker"
+original_id: deploy-docker
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+To deploy a Pulsar cluster on Docker, complete the following steps:
+1. Deploy a ZooKeeper cluster (optional)
+2. Initialize cluster metadata
+3. Deploy a BookKeeper cluster
+4. Deploy one or more Pulsar brokers
+
+## Prepare
+
+To run Pulsar on Docker, you need to create a container for each Pulsar 
component: ZooKeeper, BookKeeper and broker. You can pull the images of 
ZooKeeper and BookKeeper separately on [Docker Hub](https://hub.docker.com/), 
and pull a [Pulsar 
image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) for the broker. 
You can also pull only one [Pulsar 
image](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) and create three 
containers with this image. This tutorial takes the second  [...]
+
+### Pull a Pulsar image
+You can pull a Pulsar image from [Docker 
Hub](https://hub.docker.com/r/apachepulsar/pulsar-all/tags) with the following 
command.
+
+```
+
+docker pull apachepulsar/pulsar-all:latest
+
+```
+
+### Create three containers
+Create containers for ZooKeeper, BookKeeper, and the broker. In this example, they are named `zookeeper`, `bookkeeper`, and `broker` respectively. You can name them as you like with the `--name` flag; without it, Docker generates random container names.
+
+```
+
+docker run -it --name bookkeeper apachepulsar/pulsar-all:latest /bin/bash
+docker run -it --name zookeeper apachepulsar/pulsar-all:latest /bin/bash
+docker run -it --name broker apachepulsar/pulsar-all:latest /bin/bash
+
+```
+
+### Create a network
+To deploy a Pulsar cluster on Docker, you need to create a `network` and 
connect the containers of ZooKeeper, BookKeeper and broker to this network. The 
following command creates the network `pulsar`:
+
+```
+
+docker network create pulsar
+
+```
+
+### Connect containers to network
+Connect the containers of ZooKeeper, BookKeeper and broker to the `pulsar` 
network with the following commands. 
+
+```
+
+docker network connect pulsar zookeeper
+docker network connect pulsar bookkeeper
+docker network connect pulsar broker
+
+```
+
+To check whether the containers are successfully connected to the network, 
enter the `docker network inspect pulsar` command.
+
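+The `inspect` output is JSON with a `Containers` map. The following sketch shows the shape to look for, using an abridged sample of the output (container IDs are hypothetical):

```shell

# Abridged sample of the relevant part of `docker network inspect pulsar` output
cat > inspect-sample.json <<'EOF'
{
  "Name": "pulsar",
  "Containers": {
    "abc123": { "Name": "zookeeper" },
    "def456": { "Name": "bookkeeper" },
    "ghi789": { "Name": "broker" }
  }
}
EOF

# All three containers should appear by name under "Containers"
grep -oE '"Name": "(zookeeper|bookkeeper|broker)"' inspect-sample.json

```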
+For detailed information about how to deploy a ZooKeeper cluster, a BookKeeper cluster, and brokers, see [deploy a cluster on bare metal](deploy-bare-metal).
diff --git 
a/site2/website-next/versioned_docs/version-2.7.1/deploy-kubernetes.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-kubernetes.md
new file mode 100644
index 0000000..f8f4500
--- /dev/null
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-kubernetes.md
@@ -0,0 +1,15 @@
+---
+id: deploy-kubernetes
+title: Deploy Pulsar on Kubernetes
+sidebar_label: "Kubernetes"
+original_id: deploy-kubernetes
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+To get up and running with the Pulsar Helm charts as fast as possible, in a **non-production** use case, we provide a [quick start guide](getting-started-helm) for Proof of Concept (PoC) deployments.
+
+To configure and install a Pulsar cluster on Kubernetes for production usage, 
follow the complete [Installation Guide](helm-install).
\ No newline at end of file
diff --git 
a/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md 
b/site2/website-next/versioned_docs/version-2.7.1/deploy-monitoring.md
similarity index 95%
copy from site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md
copy to site2/website-next/versioned_docs/version-2.7.1/deploy-monitoring.md
index 6923caa..d6bdd5c 100644
--- a/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.7.1/deploy-monitoring.md
@@ -121,5 +121,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git 
a/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md 
b/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md
index 6923caa..d6bdd5c 100644
--- a/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.7.3/deploy-monitoring.md
@@ -121,5 +121,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git 
a/site2/website-next/versioned_docs/version-2.8.0/deploy-monitoring.md 
b/site2/website-next/versioned_docs/version-2.8.0/deploy-monitoring.md
index eece781..2171dc2 100644
--- a/site2/website-next/versioned_docs/version-2.8.0/deploy-monitoring.md
+++ b/site2/website-next/versioned_docs/version-2.8.0/deploy-monitoring.md
@@ -148,5 +148,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website-next/versioned_sidebars/version-2.7.1-sidebars.json 
b/site2/website-next/versioned_sidebars/version-2.7.1-sidebars.json
index 2900041..5244cdc 100644
--- a/site2/website-next/versioned_sidebars/version-2.7.1-sidebars.json
+++ b/site2/website-next/versioned_sidebars/version-2.7.1-sidebars.json
@@ -261,6 +261,40 @@
           "id": "version-2.7.1/helm-tools"
         }
       ]
+    },
+    {
+      "type": "category",
+      "label": "Deployment",
+      "items": [
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-aws"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-kubernetes"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-bare-metal"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-bare-metal-multi-cluster"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-dcos"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-docker"
+        },
+        {
+          "type": "doc",
+          "id": "version-2.7.1/deploy-monitoring"
+        }
+      ]
     }
   ]
 }
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.0/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.7.0/deploy-monitoring.md
index a7649ed..b4d29dc 100644
--- a/site2/website/versioned_docs/version-2.7.0/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.7.0/deploy-monitoring.md
@@ -91,5 +91,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
\ No newline at end of file
diff --git a/site2/website/versioned_docs/version-2.7.1/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.7.1/deploy-monitoring.md
index 5588a95..0673c01 100644
--- a/site2/website/versioned_docs/version-2.7.1/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.7.1/deploy-monitoring.md
@@ -103,5 +103,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.7.2/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.7.2/deploy-monitoring.md
index 8c03599..bb1d3c0 100644
--- a/site2/website/versioned_docs/version-2.7.2/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.7.2/deploy-monitoring.md
@@ -104,5 +104,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.7.3/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.7.3/deploy-monitoring.md
index 50b0805..b30c312 100644
--- a/site2/website/versioned_docs/version-2.7.3/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.7.3/deploy-monitoring.md
@@ -103,5 +103,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.8.0/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.8.0/deploy-monitoring.md
index 586bcc0..d210332 100644
--- a/site2/website/versioned_docs/version-2.8.0/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.8.0/deploy-monitoring.md
@@ -124,5 +124,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.8.1/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.8.1/deploy-monitoring.md
index 1eaf947..9e58e8c 100644
--- a/site2/website/versioned_docs/version-2.8.1/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.8.1/deploy-monitoring.md
@@ -124,5 +124,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
diff --git a/site2/website/versioned_docs/version-2.8.2/deploy-monitoring.md 
b/site2/website/versioned_docs/version-2.8.2/deploy-monitoring.md
index 17f1986..95bfe6e 100644
--- a/site2/website/versioned_docs/version-2.8.2/deploy-monitoring.md
+++ b/site2/website/versioned_docs/version-2.8.2/deploy-monitoring.md
@@ -124,5 +124,5 @@ The following are some Grafana dashboards examples:
 - 
[pulsar-grafana](http://pulsar.apache.org/docs/en/deploy-monitoring/#grafana): 
a Grafana dashboard that displays metrics collected in Prometheus for Pulsar 
clusters running on Kubernetes.
 - 
[apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard):
 a collection of Grafana dashboard templates for different Pulsar components 
running on both Kubernetes and on-premise machines.
 
- ## Alerting rules
- You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
+## Alerting rules
+You can set alerting rules according to your Pulsar environment. To configure 
alerting rules for Apache Pulsar, refer to [alerting 
rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/).
