This is an automated email from the ASF dual-hosted git repository.

sijie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/pulsar.git


The following commit(s) were added to refs/heads/master by this push:
     new 6df8d40  [doc] Improve Pulsar deployment bare metal (#5111)
6df8d40 is described below

commit 6df8d400e9a0b2977f858b5ecfb9f4f9a838d0f8
Author: Monica-zy <44013755+monica...@users.noreply.github.com>
AuthorDate: Tue Sep 17 22:00:14 2019 +0800

    [doc] Improve Pulsar deployment bare metal (#5111)
    
    Motivation
    
    Improve the language and the overall descriptive style of the Pulsar 
deployment document (deploy-bare-metal section): 
http://pulsar.apache.org/docs/en/next/deploy-bare-metal/
    
    Modifications
    
    Adjust the tone, personal pronouns, and voice, and fix typos in some 
sentences in the document.
---
 site2/docs/deploy-bare-metal.md | 203 ++++++++++++++++++++--------------------
 1 file changed, 101 insertions(+), 102 deletions(-)

diff --git a/site2/docs/deploy-bare-metal.md b/site2/docs/deploy-bare-metal.md
index 89d7434..d168254 100644
--- a/site2/docs/deploy-bare-metal.md
+++ b/site2/docs/deploy-bare-metal.md
@@ -1,80 +1,80 @@
 ---
 id: deploy-bare-metal
-title: Deploying a cluster on bare metal
+title: Deploy a cluster on bare metal
 sidebar_label: Bare metal
 ---
 
 
 > ### Tips
 >
-> 1. Single-cluster Pulsar installations should be sufficient for all but the 
most ambitious use cases. If you're interested in experimenting with
-> Pulsar or using it in a startup or on a single team, we recommend opting for 
a single cluster. If you do need to run a multi-cluster Pulsar instance,
-> however, see the guide [here](deploy-bare-metal-multi-cluster.md).
+> 1. Single-cluster Pulsar installations should be sufficient for all but the 
most ambitious use cases. If you are interested in experimenting with
+> Pulsar or using Pulsar in a startup or on a single team, opt for a single 
cluster. If you do need to run a multi-cluster Pulsar instance,
+> see the guide [here](deploy-bare-metal-multi-cluster.md).
 >
 > 2. If you want to use all builtin [Pulsar IO](io-overview.md) connectors in 
 > your Pulsar deployment, you need to download `apache-pulsar-io-connectors`
-> package and make sure it is installed under `connectors` directory in the 
pulsar directory on every broker node or on every function-worker node if you
+> package and install `apache-pulsar-io-connectors` under `connectors` 
directory in the pulsar directory on every broker node or on every 
function-worker node if you
 > have run a separate cluster of function workers for [Pulsar 
 > Functions](functions-overview.md).
 >
 > 3. If you want to use [Tiered Storage](concepts-tiered-storage.md) feature 
 > in your Pulsar deployment, you need to download `apache-pulsar-offloaders`
-> package and make sure it is installed under `offloaders` directory in the 
pulsar directory on every broker node. For more details of how to configure
-> this feature, you could reference this [Tiered storage 
cookbook](cookbooks-tiered-storage.md).
+> package and install `apache-pulsar-offloaders` under `offloaders` directory 
in the pulsar directory on every broker node. For more details of how to 
configure
+> this feature, you can refer to the [Tiered storage 
cookbook](cookbooks-tiered-storage.md).
 
 Deploying a Pulsar cluster involves doing the following (in order):
 
-* Deploying a [ZooKeeper](#deploying-a-zookeeper-cluster) cluster (optional)
-* Initializing [cluster metadata](#initializing-cluster-metadata)
-* Deploying a [BookKeeper](#deploying-a-bookkeeper-cluster) cluster
-* Deploying one or more Pulsar [brokers](#deploying-pulsar-brokers)
+* Deploy a [ZooKeeper](#deploy-a-zookeeper-cluster) cluster (optional)
+* Initialize [cluster metadata](#initialize-cluster-metadata)
+* Deploy a [BookKeeper](#deploy-a-bookkeeper-cluster) cluster
+* Deploy one or more Pulsar [brokers](#deploy-pulsar-brokers)
 
 ## Preparation
 
 ### Requirements
 
-> If you already have an existing zookeeper cluster and would like to reuse 
it, you don't need to prepare the machines
+> If you already have an existing zookeeper cluster and want to reuse it, you 
do not need to prepare the machines
 > for running ZooKeeper.
 
-To run Pulsar on bare metal, you are recommended to have:
+To run Pulsar on bare metal, it is recommended that you have the following:
 
 * At least 6 Linux machines or VMs
-  * 3 running [ZooKeeper](https://zookeeper.apache.org)
-  * 3 running a Pulsar broker, and a 
[BookKeeper](https://bookkeeper.apache.org) bookie
+  * 3 for running [ZooKeeper](https://zookeeper.apache.org)
+  * 3 for running a Pulsar broker, and a 
[BookKeeper](https://bookkeeper.apache.org) bookie
 * A single [DNS](https://en.wikipedia.org/wiki/Domain_Name_System) name 
covering all of the Pulsar broker hosts
 
-> However if you don't have enough machines, or are trying out Pulsar in 
cluster mode (and expand the cluster later),
-> you can even deploy Pulsar in one node, where it will run zookeeper, bookie 
and broker in same machine.
+> If you do not have enough machines, or want to try out Pulsar in cluster 
mode (and expand the cluster later),
+> you can even deploy Pulsar in one node, where ZooKeeper, bookie, and broker 
run on the same machine.
 
-> If you don't have a DNS server, you can use multi-host in service URL 
instead.
+> If you do not have a DNS server, you can use multi-host in service URL 
instead.
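
For illustration, a multi-host service URL simply lists the broker hosts, comma-separated, where a single DNS name would otherwise go. A minimal client-configuration sketch with hypothetical host names (the same form appears in the metadata-initialization note further down):

```properties
webServiceUrl=http://host1:8080,host2:8080,host3:8080
brokerServiceUrl=pulsar://host1:6650,host2:6650,host3:6650
```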
 
-Each machine in your cluster will need to have [Java 
8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or 
higher installed.
+Each machine in your cluster needs to have [Java 
8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or 
a higher version of Java installed.
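
As a quick sanity check, you can confirm the installed Java version on each machine from the shell (the exact output format varies by JDK vendor):

```bash
$ java -version
```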
 
-Here's a diagram showing the basic setup:
+The following is a diagram showing the basic setup:
 
 ![alt-text](assets/pulsar-basic-setup.png)
 
-In this diagram, connecting clients need to be able to communicate with the 
Pulsar cluster using a single URL, in this case `pulsar-cluster.acme.com`, that 
abstracts over all of the message-handling brokers. Pulsar message brokers run 
on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on 
ZooKeeper.
+In this diagram, connecting clients need to be able to communicate with the 
Pulsar cluster using a single URL. In this case, `pulsar-cluster.acme.com` 
abstracts over all of the message-handling brokers. Pulsar message brokers run 
on machines alongside BookKeeper bookies; brokers and bookies, in turn, rely on 
ZooKeeper.
 
 ### Hardware considerations
 
-When deploying a Pulsar cluster, we have some basic recommendations that you 
should keep in mind when capacity planning.
+When you deploy a Pulsar cluster, keep in mind the following basic 
recommendations when you do capacity planning.
 
 #### ZooKeeper
 
-For machines running ZooKeeper, we recommend using lighter-weight machines or 
VMs. Pulsar uses ZooKeeper only for periodic coordination- and 
configuration-related tasks, *not* for basic operations. If you're running 
Pulsar on [Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a 
[t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html)
 instance would likely suffice.
+For machines running ZooKeeper, it is sufficient to use lighter-weight machines or 
VMs. Pulsar uses ZooKeeper only for periodic coordination-related and 
configuration-related tasks, *not* for basic operations. If you run Pulsar on 
[Amazon Web Services](https://aws.amazon.com/) (AWS), for example, a 
[t2.small](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html)
 instance will likely suffice.
 
-#### Bookies & Brokers
+#### Bookies and Brokers
 
-For machines running a bookie and a Pulsar broker, we recommend using more 
powerful machines. For an AWS deployment, for example, 
[i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/)
 instances may be appropriate. On those machines we also recommend:
+For machines running a bookie and a Pulsar broker, it is better to use more 
powerful machines. For an AWS deployment, for example, 
[i3.4xlarge](https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/)
 instances may be appropriate. On those machines you can use the following:
 
 * Fast CPUs and 10Gbps 
[NIC](https://en.wikipedia.org/wiki/Network_interface_controller) (for Pulsar 
brokers)
 * Small and fast [solid-state 
drives](https://en.wikipedia.org/wiki/Solid-state_drive) (SSDs) or [hard disk 
drives](https://en.wikipedia.org/wiki/Hard_disk_drive) (HDDs) with a 
[RAID](https://en.wikipedia.org/wiki/RAID) controller and a battery-backed 
write cache (for BookKeeper bookies)
 
-## Installing the Pulsar binary package
+## Install the Pulsar binary package
 
-> You'll need to install the Pulsar binary package on *each machine in the 
cluster*, including machines running 
[ZooKeeper](#deploying-a-zookeeper-cluster) and 
[BookKeeper](#deploying-a-bookkeeper-cluster).
+> You need to install the Pulsar binary package on *each machine in the 
cluster*, including machines running [ZooKeeper](#deploy-a-zookeeper-cluster) 
and [BookKeeper](#deploy-a-bookkeeper-cluster).
 
-To get started deploying a Pulsar cluster on bare metal, you'll need to 
download a binary tarball release in one of the following ways:
+To get started deploying a Pulsar cluster on bare metal, you need to download 
a binary tarball release in one of the following ways:
 
-* By clicking on the link directly below, which will automatically trigger a 
download:
+* By clicking on the link below directly, which automatically triggers a 
download:
   * <a href="pulsar:binary_release_url" download>Pulsar {{pulsar:version}} 
binary release</a>
 * From the Pulsar [downloads page](pulsar:download_page_url)
 * From the Pulsar [releases 
page](https://github.com/apache/pulsar/releases/latest) on 
[GitHub](https://github.com)
@@ -84,7 +84,7 @@ To get started deploying a Pulsar cluster on bare metal, 
you'll need to download
 $ wget pulsar:binary_release_url
 ```
 
-Once you've downloaded the tarball, untar it and `cd` into the resulting 
directory:
+Once you download the tarball, untar it and `cd` into the resulting directory:
 
 ```bash
 $ tar xvzf apache-pulsar-{{pulsar:version}}-bin.tar.gz
@@ -95,20 +95,19 @@ The untarred directory contains the following 
subdirectories:
 
 Directory | Contains
 :---------|:--------
-`bin` | Pulsar's [command-line tools](reference-cli-tools.md), such as 
[`pulsar`](reference-cli-tools.md#pulsar) and 
[`pulsar-admin`](reference-pulsar-admin.md)
+`bin` | [command-line tools](reference-cli-tools.md) of Pulsar, such as 
[`pulsar`](reference-cli-tools.md#pulsar) and 
[`pulsar-admin`](reference-pulsar-admin.md)
 `conf` | Configuration files for Pulsar, including for [broker 
configuration](reference-configuration.md#broker), [ZooKeeper 
configuration](reference-configuration.md#zookeeper), and more
-`data` | The data storage directory used by ZooKeeper and BookKeeper.
-`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files used 
by Pulsar.
-`logs` | Logs created by the installation.
+`data` | The data storage directory that ZooKeeper and BookKeeper use
+`lib` | The [JAR](https://en.wikipedia.org/wiki/JAR_(file_format)) files that 
Pulsar uses
+`logs` | Logs that the installation creates
 
-## Installing Builtin Connectors (optional)
+## [Install Builtin Connectors (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-builtin-connectors-optional)
 
-> Since release `2.1.0-incubating`, Pulsar releases a separate binary 
distribution, containing all the `builtin` connectors.
-> If you would like to enable those `builtin` connectors, you can follow the 
instructions as below; otherwise you can
+> Since release `2.1.0-incubating`, Pulsar releases a separate binary 
distribution, containing all the `builtin` connectors.
+> If you want to enable those `builtin` connectors, you can follow the 
instructions as below; otherwise you can
 > skip this section for now.
 
-To get started using builtin connectors, you'll need to download the 
connectors tarball release on every broker node in
-one of the following ways:
+To get started using builtin connectors, you need to download the connectors 
tarball release on every broker node in one of the following ways:
 
 * by clicking the link below and downloading the release from an Apache mirror:
 
@@ -122,8 +121,8 @@ one of the following ways:
   $ wget pulsar:connector_release_url/{connector}-{{pulsar:version}}.nar
   ```
 
-Once the nar file is downloaded, copy the file to directory `connectors` in 
the pulsar directory, 
-for example, if the connector file 
`pulsar-io-aerospike-{{pulsar:version}}.nar` is downloaded:
+Once you download the nar file, copy the file to directory `connectors` in the 
pulsar directory, 
+for example, if you download the connector file 
`pulsar-io-aerospike-{{pulsar:version}}.nar`:
 
 ```bash
 $ mkdir connectors
@@ -134,14 +133,13 @@ pulsar-io-aerospike-{{pulsar:version}}.nar
 ...
 ```
 
-## Installing Tiered Storage Offloaders (optional)
+## [Install Tiered Storage Offloaders (optional)](https://pulsar.apache.org/docs/en/next/standalone/#install-tiered-storage-offloaders-optional)
 
-> Since release `2.2.0`, Pulsar releases a separate binary distribution, 
containing the tiered storage offloaders.
-> If you would like to enable tiered storage feature, you can follow the 
instructions as below; otherwise you can
+> Since release `2.2.0`, Pulsar releases a separate binary 
distribution, containing the tiered storage offloaders.
+> If you want to enable tiered storage feature, you can follow the 
instructions as below; otherwise you can
 > skip this section for now.
 
-To get started using tiered storage offloaders, you'll need to download the 
offloaders tarball release on every broker node in
-one of the following ways:
+To get started using tiered storage offloaders, you need to download the 
offloaders tarball release on every broker node in one of the following ways:
 
 * by clicking the link below and downloading the release from an Apache mirror:
 
@@ -155,13 +153,12 @@ one of the following ways:
   $ wget pulsar:offloader_release_url
   ```
 
-Once the tarball is downloaded, in the pulsar directory, untar the offloaders 
package and copy the offloaders as `offloaders`
-in the pulsar directory:
+Once you download the tarball, in the pulsar directory, untar the offloaders 
package and copy the offloaders as `offloaders` in the pulsar directory:
 
 ```bash
 $ tar xvfz apache-pulsar-offloaders-{{pulsar:version}}-bin.tar.gz
 
-// you will find a directory named 
`apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
+// you can find a directory named 
`apache-pulsar-offloaders-{{pulsar:version}}` in the pulsar directory
 // then copy the offloaders
 
 $ mv apache-pulsar-offloaders-{{pulsar:version}}/offloaders offloaders
@@ -170,16 +167,16 @@ $ ls offloaders
 tiered-storage-jcloud-{{pulsar:version}}.nar
 ```
 
-For more details of how to configure tiered storage feature, you could 
reference this [Tiered storage cookbook](cookbooks-tiered-storage.md)
+For more details of how to configure tiered storage feature, you can refer to 
the [Tiered storage cookbook](cookbooks-tiered-storage.md)
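
As a rough sketch of what enabling an offloader looks like in `conf/broker.conf`, using the AWS S3 driver as one example (the bucket and region values here are placeholders; see the cookbook above for the full and current list of properties):

```properties
managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadBucket=my-pulsar-offload-bucket
s3ManagedLedgerOffloadRegion=us-west-2
```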
 
 
-## Deploying a ZooKeeper cluster
+## Deploy a ZooKeeper cluster
 
-> If you already have an exsiting zookeeper cluster and would like to use it, 
you can skip this section.
+> If you already have an existing zookeeper cluster and want to use it, you 
can skip this section.
 
-[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential 
coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar 
cluster you'll need to deploy ZooKeeper first (before all other components). We 
recommend deploying a 3-node ZooKeeper cluster. Pulsar does not make heavy use 
of ZooKeeper, so more lightweight machines or VMs should suffice for running 
ZooKeeper.
+[ZooKeeper](https://zookeeper.apache.org) manages a variety of essential 
coordination- and configuration-related tasks for Pulsar. To deploy a Pulsar 
cluster, you need to deploy ZooKeeper first (before all other components). A 
3-node ZooKeeper cluster is the recommended setup. Pulsar does not make heavy use of 
ZooKeeper, so more lightweight machines or VMs should suffice for running 
ZooKeeper.
 
-To begin, add all ZooKeeper servers to the configuration specified in 
[`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar 
directory you created [above](#installing-the-pulsar-binary-package)). Here's 
an example:
+To begin, add all ZooKeeper servers to the configuration specified in 
[`conf/zookeeper.conf`](reference-configuration.md#zookeeper) (in the Pulsar 
directory that you create [above](#install-the-pulsar-binary-package)). The 
following is an example:
 
 ```properties
 server.1=zk1.us-west.example.com:2888:3888
@@ -189,26 +186,26 @@ server.3=zk3.us-west.example.com:2888:3888
 
 > If you have only one machine to deploy Pulsar, you just need to add one 
 > server entry in the configuration file.
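
For such a single-node deployment, that one entry just points at the local machine; a minimal sketch reusing the example host name from above:

```properties
server.1=zk1.us-west.example.com:2888:3888
```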
 
-On each host, you need to specify the ID of the node in each node's `myid` 
file, which is in each server's `data/zookeeper` folder by default (this can be 
changed via the [`dataDir`](reference-configuration.md#zookeeper-dataDir) 
parameter).
+On each host, you need to specify the ID of the node in the `myid` file of 
each node, which is in the `data/zookeeper` folder of each server by default (you 
can change the file location via the 
[`dataDir`](reference-configuration.md#zookeeper-dataDir) parameter).
 
-> See the [Multi-server setup 
guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup)
 in the ZooKeeper documentation for detailed info on `myid` and more.
+> See the [Multi-server setup 
guide](https://zookeeper.apache.org/doc/r3.4.10/zookeeperAdmin.html#sc_zkMulitServerSetup)
 in the ZooKeeper documentation for detailed information on `myid` and more.
 
-On a ZooKeeper server at `zk1.us-west.example.com`, for example, you could set 
the `myid` value like this:
+On a ZooKeeper server at `zk1.us-west.example.com`, for example, you can set 
the `myid` value like this:
 
 ```bash
 $ mkdir -p data/zookeeper
 $ echo 1 > data/zookeeper/myid
 ```
 
-On `zk2.us-west.example.com` the command would be `echo 2 > 
data/zookeeper/myid` and so on.
+On `zk2.us-west.example.com` the command is `echo 2 > data/zookeeper/myid` and 
so on.
 
-Once each server has been added to the `zookeeper.conf` configuration and has 
the appropriate `myid` entry, you can start ZooKeeper on all hosts (in the 
background, using nohup) with the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
+Once you add each server to the `zookeeper.conf` configuration and have the 
appropriate `myid` entry, you can start ZooKeeper on all hosts (in the 
background, using nohup) with the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
 
 ```bash
 $ bin/pulsar-daemon start zookeeper
 ```
 
-> If you are planning to deploy zookeeper with bookie on the same node, you
+> If you plan to deploy zookeeper with bookie on the same node, you
 > need to start zookeeper by using different stats port.
 
 Start zookeeper with [`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) 
CLI tool like:
@@ -217,11 +214,11 @@ Start zookeeper with 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI
 $ PULSAR_EXTRA_OPTS="-Dstats_server_port=8001" bin/pulsar-daemon start 
zookeeper
 ```
 
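
Optionally, before moving on, you can check that each ZooKeeper server is responding. A minimal sketch using the standard ZooKeeper four-letter command, assuming `nc` is available and the default client port 2181 (newer ZooKeeper versions may require `ruok` to be allowed via `4lw.commands.whitelist`):

```bash
$ echo ruok | nc localhost 2181
# A healthy server replies with "imok"
```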
-## Initializing cluster metadata
+## Initialize cluster metadata
 
-Once you've deployed ZooKeeper for your cluster, there is some metadata that 
needs to be written to ZooKeeper for each cluster in your instance. It only 
needs to be written **once**.
+Once you deploy ZooKeeper for your cluster, you need to write some metadata to 
ZooKeeper for each cluster in your instance. You only need to write it **once**.
 
-You can initialize this metadata using the 
[`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata)
 command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This 
command can be run on any machine in your ZooKeeper cluster. Here's an example:
+You can initialize this metadata using the 
[`initialize-cluster-metadata`](reference-cli-tools.md#pulsar-initialize-cluster-metadata)
 command of the [`pulsar`](reference-cli-tools.md#pulsar) CLI tool. This 
command can be run on any machine in your ZooKeeper cluster. The following is 
an example:
 
 ```shell
 $ bin/pulsar initialize-cluster-metadata \
@@ -234,17 +231,19 @@ $ bin/pulsar initialize-cluster-metadata \
   --broker-service-url-tls pulsar+ssl://pulsar.us-west.example.com:6651
 ```
 
-As you can see from the example above, the following needs to be specified:
+As you can see from the example above, you
+need to specify the following:
 
 Flag | Description
 :----|:-----------
 `--cluster` | A name for the cluster
 `--zookeeper` | A "local" ZooKeeper connection string for the cluster. This 
connection string only needs to include *one* machine in the ZooKeeper cluster.
 `--configuration-store` | The configuration store connection string for the 
entire instance. As with the `--zookeeper` flag, this connection string only 
needs to include *one* machine in the ZooKeeper cluster.
-`--web-service-url` | The web service URL for the cluster, plus a port. This 
URL should be a standard DNS name. The default port is 8080 (we don't recommend 
using a different port).
-`--web-service-url-tls` | If you're using [TLS](security-tls-transport.md), 
you'll also need to specify a TLS web service URL for the cluster. The default 
port is 8443 (we don't recommend using a different port).
-`--broker-service-url` | A broker service URL enabling interaction with the 
brokers in the cluster. This URL should use the same DNS name as the web 
service URL but should use the `pulsar` scheme instead. The default port is 
6650 (we don't recommend using a different port).
-`--broker-service-url-tls` | If you're using [TLS](security-tls-transport.md), 
you'll also need to specify a TLS web service URL for the cluster as well as a 
TLS broker service URL for the brokers in the cluster. The default port is 6651 
(we don't recommend using a different port).
+`--web-service-url` | The web service URL for the cluster, plus a port. This 
URL should be a standard DNS name. The default port is 8080 (using a different 
port is not recommended).
+`--web-service-url-tls` | If you use [TLS](security-tls-transport.md), you 
also need to specify a TLS web service URL for the cluster. The default port is 
8443 (using a different port is not recommended).
+`--broker-service-url` | A broker service URL enabling interaction with the 
brokers in the cluster. This URL should use the same DNS name as the web 
service URL but should use the `pulsar` scheme instead. The default port is 
6650 (using a different port is not recommended).
+`--broker-service-url-tls` | If you use [TLS](security-tls-transport.md), you 
also need to specify a TLS web service URL for the cluster as well as a TLS 
broker service URL for the brokers in the cluster. The default port is 6651 
(using a different port is not recommended).
+
 
 > If you don't have a DNS server, you can use multi-host in service URL with 
 > the following settings:
 >
@@ -255,28 +254,28 @@ Flag | Description
 > --broker-service-url-tls pulsar+ssl://host1:6651,host2:6651,host3:6651
 > ```
 
-## Deploying a BookKeeper cluster
+## Deploy a BookKeeper cluster
 
-[BookKeeper](https://bookkeeper.apache.org) handles all persistent data 
storage in Pulsar. You will need to deploy a cluster of BookKeeper bookies to 
use Pulsar. We recommend running a **3-bookie BookKeeper cluster**.
+[BookKeeper](https://bookkeeper.apache.org) handles all persistent data 
storage in Pulsar. You need to deploy a cluster of BookKeeper bookies to use 
Pulsar. You can choose to run a **3-bookie BookKeeper cluster**.
 
-BookKeeper bookies can be configured using the 
[`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration 
file. The most important step in configuring bookies for our purposes here is 
ensuring that the 
[`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the 
connection string for the ZooKeeper cluster. Here's an example:
+You can configure BookKeeper bookies using the 
[`conf/bookkeeper.conf`](reference-configuration.md#bookkeeper) configuration 
file. The most important step in configuring bookies for our purposes here is 
ensuring that the 
[`zkServers`](reference-configuration.md#bookkeeper-zkServers) is set to the 
connection string for the ZooKeeper cluster. The following is an example:
 
 ```properties
 
zkServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 ```
 
-Once you've appropriately modified the `zkServers` parameter, you can provide 
any other configuration modifications you need. You can find a full listing of 
the available BookKeeper configuration parameters 
[here](reference-configuration.md#bookkeeper), although we would recommend 
consulting the [BookKeeper 
documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for 
a more in-depth guide.
+Once you appropriately modify the `zkServers` parameter, you can provide any 
other configuration modifications you need. You can find a full listing of the 
available BookKeeper configuration parameters 
[here](reference-configuration.md#bookkeeper), although consulting the 
[BookKeeper 
documentation](http://bookkeeper.apache.org/docs/latest/reference/config/) for 
a more in-depth guide might be a better choice.
 
 > ##### NOTES
 >
-> Since Pulsar 2.1.0 release, Pulsar introduces [stateful 
function](functions-state.md) for Pulsar Functions. If you would like to enable 
that feature,
-> you need to enable table service on BookKeeper by setting following setting 
in `conf/bookkeeper.conf` file.
+> Since the Pulsar 2.1.0 release, Pulsar introduces [stateful 
function](functions-state.md) for Pulsar Functions. If you want to enable that 
feature,
+> you need to enable table service on BookKeeper by adding the following 
setting in the `conf/bookkeeper.conf` file.
 >
 > ```conf
 > extraServerComponents=org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent
 > ```
 
-Once you've applied the desired configuration in `conf/bookkeeper.conf`, you 
can start up a bookie on each of your BookKeeper hosts. You can start up each 
bookie either in the background, using 
[nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
+Once you apply the desired configuration in `conf/bookkeeper.conf`, you can 
start up a bookie on each of your BookKeeper hosts. You can start up each 
bookie either in the background, using 
[nohup](https://en.wikipedia.org/wiki/Nohup), or in the foreground.
 
 To start the bookie in the background, use the 
[`pulsar-daemon`](reference-cli-tools.md#pulsar-daemon) CLI tool:
 
@@ -290,44 +289,44 @@ To start the bookie in the foreground:
 $ bin/bookkeeper bookie
 ```
 
-You can verify that a bookie is working properly by running the `bookiesanity` 
command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
+You can verify that a bookie works properly by running the `bookiesanity` 
command for the [BookKeeper shell](reference-cli-tools.md#shell) on it:
 
 ```bash
 $ bin/bookkeeper shell bookiesanity
 ```
 
-This will create an ephemeral BookKeeper ledger on the local bookie, write a 
few entries, read them back, and finally delete the ledger.
+This command creates an ephemeral BookKeeper ledger on the local bookie, 
writes a few entries, reads them back, and finally deletes the ledger.
 
-After you have started all the bookies, you can use `simpletest` command for 
[BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to
+After you start all the bookies, you can use the `simpletest` command for 
[BookKeeper shell](reference-cli-tools.md#shell) on any bookie node, to
 verify all the bookies in the cluster are up running.
 
 ```bash
 $ bin/bookkeeper shell simpletest --ensemble <num-bookies> --writeQuorum 
<num-bookies> --ackQuorum <num-bookies> --numEntries <num-entries>
 ```
 
-This command will create a `num-bookies` sized ledger on the cluster, write a 
few entries, and finally delete the ledger.
+This command creates a `num-bookies` sized ledger on the cluster, writes a few 
entries, and finally deletes the ledger.
 
 
-## Deploying Pulsar brokers
+## Deploy Pulsar brokers
 
-Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. 
Brokers handle Pulsar messages and provide Pulsar's administrative interface. 
We recommend running **3 brokers**, one for each machine that's already running 
a BookKeeper bookie.
+Pulsar brokers are the last thing you need to deploy in your Pulsar cluster. 
Brokers handle Pulsar messages and provide the administrative interface of 
Pulsar. A good choice is to run **3 brokers**, one for each machine that 
already runs a BookKeeper bookie.
 
-### Configuring Brokers
+### Configure Brokers
 
-The most important element of broker configuration is ensuring that each 
broker is aware of the ZooKeeper cluster that you've deployed. Make sure that 
the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) 
and 
[`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers)
 parameters. In this case, since we only have 1 cluster and no configuration 
store setup, the `configurationStoreServers` will point to the same 
`zookeeperServers`.
+The most important element of broker configuration is ensuring that each 
broker is aware of the ZooKeeper cluster that you have deployed. Make sure to set 
the [`zookeeperServers`](reference-configuration.md#broker-zookeeperServers) 
and 
[`configurationStoreServers`](reference-configuration.md#broker-configurationStoreServers)
 parameters. In this case, since you only have 1 cluster and no configuration 
store setup, the `configurationStoreServers` points to the same 
`zookeeperServers`.
 
 ```properties
 
zookeeperServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 
configurationStoreServers=zk1.us-west.example.com:2181,zk2.us-west.example.com:2181,zk3.us-west.example.com:2181
 ```
 
-You also need to specify the cluster name (matching the name that you provided 
when [initializing the cluster's metadata](#initializing-cluster-metadata)):
+You also need to specify the cluster name (matching the name that you provide 
when you [initialize the metadata of the 
cluster](#initialize-cluster-metadata)):
 
 ```properties
 clusterName=pulsar-cluster-1
 ```
 
-In addition, you need to match the broker and web service ports provided when 
initializing the cluster's metadata (especially when using a different port 
from default):
+In addition, you need to match the broker and web service ports provided when 
you initialize the metadata of the cluster (especially when you use a different 
port from default):
 
 ```properties
 brokerServicePort=6650
@@ -349,7 +348,7 @@ webServicePortTls=8443
 > managedLedgerDefaultAckQuorum=1
 > ```
 
-### Enabling Pulsar Functions (optional)
+### Enable Pulsar Functions (optional)
 
 If you want to enable [Pulsar Functions](functions-overview.md), you can 
follow the instructions as below:
 
@@ -359,17 +358,17 @@ If you want to enable [Pulsar 
Functions](functions-overview.md), you can follow
     functionsWorkerEnabled=true
     ```
 
-2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the 
cluster name that you provided when [initializing the cluster's 
metadata](#initializing-cluster-metadata). 
+2. Edit `conf/functions_worker.yml` and set `pulsarFunctionsCluster` to the 
cluster name that you provide when you [initialize the metadata of the 
cluster](#initialize-cluster-metadata). 
 
     ```conf
     pulsarFunctionsCluster: pulsar-cluster-1
     ```
 
-If you would like to learn more options about deploying functions worker, 
please checkout [Deploy and manage functions worker](functions-worker.md).
+If you want to learn more options about deploying a functions worker, check out 
[Deploy and manage functions worker](functions-worker.md).
 
-### Starting Brokers
+### Start Brokers
 
-You can then provide any other configuration changes that you'd like in the 
[`conf/broker.conf`](reference-configuration.md#broker) file. Once you've 
decided on a configuration, you can start up the brokers for your Pulsar 
cluster. Like ZooKeeper and BookKeeper, brokers can be started either in the 
foreground or in the background, using nohup.
+You can then provide any other configuration changes that you want in the 
[`conf/broker.conf`](reference-configuration.md#broker) file. Once you decide 
on a configuration, you can start up the brokers for your Pulsar cluster. Like 
ZooKeeper and BookKeeper, you can start brokers either in the foreground or in 
the background, using nohup.
 
 You can start a broker in the foreground using the [`pulsar 
broker`](reference-cli-tools.md#pulsar-broker) command:
 
@@ -383,13 +382,13 @@ You can start a broker in the background using the 
[`pulsar-daemon`](reference-c
 $ bin/pulsar-daemon start broker
 ```
 
-Once you've succesfully started up all the brokers you intend to use, your 
Pulsar cluster should be ready to go!
+Once you successfully start up all the brokers that you intend to use, your 
Pulsar cluster should be ready to go!
 
-## Connecting to the running cluster
+## Connect to the running cluster
 
-Once your Pulsar cluster is up and running, you should be able to connect with 
it using Pulsar clients. One such client is the 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included 
with the Pulsar binary package. The `pulsar-client` tool can publish messages 
to and consume messages from Pulsar topics and thus provides a simple way to 
make sure that your cluster is runnning properly.
+Once your Pulsar cluster is up and running, you should be able to connect with 
it using Pulsar clients. One such client is the 
[`pulsar-client`](reference-cli-tools.md#pulsar-client) tool, which is included 
with the Pulsar binary package. The `pulsar-client` tool can publish messages 
to and consume messages from Pulsar topics and thus provide a simple way to 
make sure that your cluster runs properly.
 
-To use the `pulsar-client` tool, first modify the client configuration file in 
[`conf/client.conf`](reference-configuration.md#client) in your binary package. 
You'll need to change the values for `webServiceUrl` and `brokerServiceUrl`, 
substituting `localhost` (which is the default), with the DNS name that you've 
assigned to your broker/bookie hosts. Here's an example:
+To use the `pulsar-client` tool, first modify the client configuration file in 
[`conf/client.conf`](reference-configuration.md#client) in your binary package. 
You need to change the values for `webServiceUrl` and `brokerServiceUrl`, 
substituting `localhost` (which is the default), with the DNS name that you 
assign to your broker/bookie hosts. The following is an example:
 
 ```properties
 webServiceUrl=http://us-west.example.com:8080
@@ -403,7 +402,7 @@ brokerServiceurl=pulsar://us-west.example.com:6650
 > brokerServiceurl=pulsar://host1:6650,host2:6650,host3:6650
 > ```
 
-Once you've done that, you can publish a message to Pulsar topic:
+Once you do that, you can publish a message to a Pulsar topic:
 
 ```bash
 $ bin/pulsar-client produce \
@@ -412,9 +411,9 @@ $ bin/pulsar-client produce \
   -m "Hello Pulsar"
 ```
 
-> You may need to use a different cluster name in the topic if you specified a 
cluster name different from `pulsar-cluster-1`.
+> You may need to use a different cluster name in the topic if you specify a 
cluster name different from `pulsar-cluster-1`.
 
-This will publish a single message to the Pulsar topic. In addition, you can 
subscribe the Pulsar topic in a different terminal before publishing messages 
as below:
+This command publishes a single message to the Pulsar topic. In addition, you 
can subscribe to the Pulsar topic in a different terminal before publishing 
messages as below:
 
 ```bash
 $ bin/pulsar-client consume \
@@ -424,16 +423,16 @@ $ bin/pulsar-client consume \
   -t "Exclusive"
 ```
 
-Once the message above has been successfully published to the topic, you 
should see it in the standard output:
+Once you successfully publish the message above to the topic, you should see 
it in the standard output:
 
 ```bash
 ----- got message -----
 Hello Pulsar
 ```
 
-## Running Functions
+## Run Functions
 
-> If you have [enabled](#enabling-pulsar-functions-optional) Pulsar Functions, 
you can also tryout pulsar functions now.
+> If you have [enabled](#enable-pulsar-functions-optional) Pulsar Functions, 
you can also try out Pulsar Functions now.
 
 Create a ExclamationFunction `exclamation`.
 
@@ -448,13 +447,13 @@ bin/pulsar-admin functions create \
   --name exclamation
 ```
 
-Check if the function is running as expected by 
[triggering](functions-deploying.md#triggering-pulsar-functions) the function.
+Check if the function runs as expected by 
[triggering](functions-deploying.md#triggering-pulsar-functions) the function.
 
 ```bash
 bin/pulsar-admin functions trigger --name exclamation --trigger-value "hello 
world"
 ```
 
-You will see output as below:
+You can see the following output:
 
 ```shell
 hello world!
