In general, wouldn't it be better to change the headlines to the imperative form? 'Create Monitors' instead of 'Creating Monitors'?

More suggestions inline.

On 11/4/19 2:52 PM, Alwin Antreich wrote:
Put the previously added sections into subsections for a better outline of
the TOC.

With the rearrangement of the first-level titles to second level, the
general description of each service needs to move under the new first-level
titles. Also add/correct some statements in those descriptions.

Signed-off-by: Alwin Antreich <a.antre...@proxmox.com>
---
  pveceph.adoc | 79 ++++++++++++++++++++++++++++++----------------------
  1 file changed, 45 insertions(+), 34 deletions(-)

diff --git a/pveceph.adoc b/pveceph.adoc
index 9806401..2972a68 100644
--- a/pveceph.adoc
+++ b/pveceph.adoc
@@ -234,11 +234,8 @@ configuration file.
[[pve_ceph_monitors]]
-Creating Ceph Monitors
-----------------------
-
-[thumbnail="screenshot/gui-ceph-monitor.png"]
-
+Ceph Monitor
+-----------
  The Ceph Monitor (MON)
  footnote:[Ceph Monitor http://docs.ceph.com/docs/luminous/start/intro/]
  maintains a master copy of the cluster map. For high availability you need to
@@ -247,6 +244,12 @@ used the installation wizard. You won't need more than 3 monitors as long
  as your cluster is small to midsize, only really large clusters will
  need more than that.
+
+Creating Monitors
+~~~~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-ceph-monitor.png"]
+
  On each node where you want to place a monitor (three monitors are recommended),
  create it by using the 'Ceph -> Monitor' tab in the GUI or run.
@@ -256,12 +259,9 @@ create it by using the 'Ceph -> Monitor' tab in the GUI or run.
  pveceph mon create
  ----
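
Maybe also worth a one-liner right below the command on how to verify that the new monitor joined the quorum. Something like the following could work (just a suggestion, assuming plain ceph CLI access on the node):

[source,bash]
----
# list all monitors and show which of them are currently in quorum
ceph mon stat
----
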
-This will also install the needed Ceph Manager ('ceph-mgr') by default. If you
-do not want to install a manager, specify the '-exclude-manager' option.
-
-Destroying Ceph Monitor
-----------------------
+Destroying Monitors
+~~~~~~~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-monitor-destroy.png"] @@ -280,16 +280,19 @@ NOTE: At least three Monitors are needed for quorum. [[pve_ceph_manager]]
-Creating Ceph Manager
-----------------------
+Ceph Manager
+------------
+The Manager daemon runs alongside the monitors, providing an interface for
+monitoring the cluster. Since the Ceph luminous release at least one ceph-mgr
+footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon is
+required.

The Manager daemon runs alongside the monitors. It provides an interface to monitor the cluster. ...

+
+Creating Manager
+~~~~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-manager.png"] -The Manager daemon runs alongside the monitors, providing an interface for
-monitoring the cluster. Since the Ceph luminous release the
-ceph-mgr footnote:[Ceph Manager http://docs.ceph.com/docs/luminous/mgr/] daemon
-is required. During monitor installation the ceph manager will be installed as
-well.
+You can install multiple Manager, but at any time only one Manager is active.

Multiple Managers can be installed, but at any time only one Manager is active.

[source,bash]
  ----
@@ -300,8 +303,8 @@ NOTE: It is recommended to install the Ceph Manager on the monitor nodes. For
  high availability install more then one manager.
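
Since only one Manager is active at any time, maybe add a hint here on how to check which one that currently is. A possible snippet (ceph mgr stat is from memory, please double check the output on a current release):

[source,bash]
----
# print which manager daemon is currently active
ceph mgr stat
----
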
-Destroying Ceph Manager
-----------------------
+Destroying Manager
+~~~~~~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-manager-destroy.png"] @@ -321,8 +324,15 @@ the cluster status or usage require a running Manager. [[pve_ceph_osds]]
-Creating Ceph OSDs
-------------------
+Ceph OSDs
+---------
+Ceph **O**bject **S**torage **D**aemons are storing objects for Ceph over the
+network. In a Ceph cluster, you will usually have one OSD per physical disk.

One OSD per physical disk is the general recommendation.
# or
It is recommended to use one OSD per physical disk.

# Not too sure though which sounds better. Having multiple OSDs on one physical disk is a bad idea in most situations, AFAIU, right?
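
# Independent of the wording, a short hint on how to inspect the resulting OSD layout might help readers here. For example (just a suggestion):

[source,bash]
----
# show the OSDs grouped per host, including size, usage and PG count
ceph osd df tree
----
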


+
+NOTE: By default an object is 4 MiB in size.
+
+Creating OSDs
+~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-osd-status.png"] @@ -346,8 +356,7 @@ ceph-volume lvm zap /dev/sd[X] --destroy WARNING: The above command will destroy data on the disk! -Ceph Bluestore
-~~~~~~~~~~~~~~
+.Ceph Bluestore
Starting with the Ceph Kraken release, a new Ceph OSD storage type was
  introduced, the so called Bluestore
@@ -386,8 +395,7 @@ internal journal or write-ahead log. It is recommended to use a fast SSD or
  NVRAM for better performance.
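
It could also help to show the matching pveceph invocation for placing the DB/WAL on such a faster device. Roughly like this, though the option name (-db_dev) is from memory and needs checking:

[source,bash]
----
# create a Bluestore OSD on /dev/sd[X] with its DB on the faster /dev/sd[Y]
# (option name -db_dev from memory, verify before documenting)
pveceph osd create /dev/sd[X] -db_dev /dev/sd[Y]
----
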
-Ceph Filestore
-~~~~~~~~~~~~~~
+.Ceph Filestore
Before Ceph Luminous, Filestore was used as default storage type for Ceph OSDs.
  Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
@@ -399,8 +407,8 @@ Starting with Ceph Nautilus, {pve} does not support creating such OSDs with
  ceph-volume lvm create --filestore --data /dev/sd[X] --journal /dev/sd[Y]
  ----
-Destroying Ceph OSDs
---------------------
+Destroying OSDs
+~~~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-osd-destroy.png"] @@ -431,14 +439,17 @@ WARNING: The above command will destroy data on the disk! [[pve_ceph_pools]]
-Creating Ceph Pools
--------------------
-
-[thumbnail="screenshot/gui-ceph-pools.png"]
-
+Ceph Pools
+----------
  A pool is a logical group for storing objects. It holds **P**lacement
  **G**roups (`PG`, `pg_num`), a collection of objects.
+
+Creating Pools
+~~~~~~~~~~~~~~
+
+[thumbnail="screenshot/gui-ceph-pools.png"]
+
  When no options are given, we set a default of **128 PGs**, a **size of 3
  replicas** and a **min_size of 2 replicas** for serving objects in a degraded
  state.
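
Maybe spell out the command that corresponds to these defaults right here, so readers see which options they can override. Along these lines (option names from memory, please double check against the pveceph man page):

[source,bash]
----
# create a pool with the documented defaults given explicitly; <name> is a placeholder
# (option names from memory, verify before documenting)
pveceph pool create <name> -pg_num 128 -size 3 -min_size 2
----
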
@@ -470,8 +481,8 @@ http://docs.ceph.com/docs/luminous/rados/operations/pools/]
  manual.
-Destroying Ceph Pools
----------------------
+Destroying Pools
+~~~~~~~~~~~~~~~~
[thumbnail="screenshot/gui-ceph-pools-destroy.png"]
  To destroy a pool on the GUI, go to a PVE host under **Ceph -> Pools** and

