LGTM, I was planning to go through a lot of these myself when I had
the time, so thanks a lot for taking care of it!

On 22 December 2016 at 15:29, 'Brian Foley' via ganeti-devel
<[email protected]> wrote:
> Some of these are also in code comments, but don't touch any string
> literals or error messages, so there should be no functional change.
>
> Signed-off-by: Brian Foley <[email protected]>
> ---
>  NEWS                                     |  8 ++++----
>  configure.ac                             |  2 +-
>  doc/cluster-keys-replacement.rst         |  2 +-
>  doc/design-allocation-efficiency.rst     |  2 +-
>  doc/design-autorepair.rst                |  8 ++++----
>  doc/design-bulk-create.rst               |  2 +-
>  doc/design-chained-jobs.rst              |  4 ++--
>  doc/design-configlock.rst                | 12 ++++++------
>  doc/design-cpu-pinning.rst               |  1 +
>  doc/design-cpu-speed.rst                 |  2 +-
>  doc/design-daemons.rst                   |  6 +++---
>  doc/design-dedicated-allocation.rst      |  2 +-
>  doc/design-file-based-storage.rst        |  4 ++--
>  doc/design-hugepages-support.rst         |  4 ++--
>  doc/design-impexp2.rst                   |  4 ++--
>  doc/design-linuxha.rst                   |  2 +-
>  doc/design-location.rst                  |  2 +-
>  doc/design-monitoring-agent.rst          | 14 +++++++-------
>  doc/design-multi-reloc.rst               |  6 +++---
>  doc/design-network.rst                   |  6 +++---
>  doc/design-node-add.rst                  |  1 +
>  doc/design-node-security.rst             | 24 ++++++++++++------------
>  doc/design-oob.rst                       |  1 +
>  doc/design-opportunistic-locking.rst     |  1 +
>  doc/design-optables.rst                  |  2 +-
>  doc/design-ovf-support.rst               |  2 +-
>  doc/design-reservations.rst              |  6 +++---
>  doc/design-resource-model.rst            |  8 ++++----
>  doc/design-restricted-commands.rst       |  1 +
>  doc/design-shared-storage-redundancy.rst |  8 ++++----
>  doc/design-shared-storage.rst            |  6 +++---
>  doc/design-ssh-ports.rst                 |  2 +-
>  doc/design-storagetypes.rst              |  2 +-
>  doc/design-sync-rate-throttling.rst      |  1 +
>  doc/design-upgrade.rst                   | 10 +++++-----
>  lib/client/gnt_cluster.py                |  2 +-
>  lib/client/gnt_job.py                    |  2 +-
>  man/gnt-cluster.rst                      |  4 ++--
>  man/mon-collector.rst                    |  4 ++--
>  src/Ganeti/Confd/Server.hs               |  4 ++--
>  src/Ganeti/THH.hs                        |  2 +-
>  src/Ganeti/WConfd/ConfigState.hs         |  2 +-
>  42 files changed, 97 insertions(+), 91 deletions(-)
>
> diff --git a/NEWS b/NEWS
> index 123bc99..b193c4b 100644
> --- a/NEWS
> +++ b/NEWS
> @@ -1607,7 +1607,7 @@ Version 2.10.6
>
>  *(Released Mon, 30 Jun 2014)*
>
> -- Make Ganeti tolerant towards differnt openssl library
> +- Make Ganeti tolerant towards different openssl library
>    version on different nodes (issue 853).
>  - Allow hspace to make useful predictions in multi-group
>    clusters with one group overfull (isse 861).
> @@ -1748,7 +1748,7 @@ New features
>    specified if the destination cluster has a default iallocator.
>  - Users can now change the soundhw and cpuid settings for XEN hypervisors.
>  - Hail and hbal now have the (optional) capability of accessing average CPU
> -  load information through the monitoring deamon, and to use it to dynamically
> +  load information through the monitoring daemon, and to use it to dynamically
>    adapt the allocation of instances.
>  - Hotplug support. Introduce new option '--hotplug' to ``gnt-instance modify``
>    so that disk and NIC modifications take effect without the need of actual
> @@ -2110,7 +2110,7 @@ Incompatible/important changes
>    '--file-storage-dir' and '--shared-file-storage-dir'.
>  - Cluster verification now includes stricter checks regarding the
>    default file and shared file storage directories. It now checks that
> -  the directories are explicitely allowed in the 'file-storage-paths' file and
> +  the directories are explicitly allowed in the 'file-storage-paths' file and
>    that the directories exist on all nodes.
>  - The list of allowed disk templates in the instance policy and the list
>    of cluster-wide enabled disk templates is now checked for consistency
> @@ -4283,7 +4283,7 @@ fixes/small improvements/cleanups.
>  Significant features
>  ~~~~~~~~~~~~~~~~~~~~
>
> -The node deamon now tries to mlock itself into memory, unless the
> +The node daemon now tries to mlock itself into memory, unless the
>  ``--no-mlock`` flag is passed. It also doesn't fail if it can't write
>  its logs, and falls back to console logging. This allows emergency
>  features such as ``gnt-node powercycle`` to work even in the event of a
> diff --git a/configure.ac b/configure.ac
> index 9b5d06f..41d1856 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -638,7 +638,7 @@ AM_CONDITIONAL([GHC_LE_76], [$GHC --numeric-version | grep -q '^7\.[[0-6]]\.'])
>
>  AC_MSG_CHECKING([checking for extra GHC flags])
>  GHC_BYVERSION_FLAGS=
> -# check for GHC supported flags that vary accross versions
> +# check for GHC supported flags that vary across versions
>  for flag in -fwarn-incomplete-uni-patterns; do
>    if $GHC -e '0' $flag >/dev/null 2>/dev/null; then
>     GHC_BYVERSION_FLAGS="$GHC_BYVERSION_FLAGS $flag"
> diff --git a/doc/cluster-keys-replacement.rst b/doc/cluster-keys-replacement.rst
> index eb0b72b..cc21915 100644
> --- a/doc/cluster-keys-replacement.rst
> +++ b/doc/cluster-keys-replacement.rst
> @@ -27,7 +27,7 @@ Replacing SSL keys
>  The cluster-wide SSL key is stored in ``/var/lib/ganeti/server.pem``.
>  Besides that, since Ganeti 2.11, each node has an individual node
>  SSL key, which is stored in ``/var/lib/ganeti/client.pem``. This
> -client certificate is signed by the cluster-wide SSL certficate.
> +client certificate is signed by the cluster-wide SSL certificate.
>
>  To renew the individual node certificates, run this command::
>
> diff --git a/doc/design-allocation-efficiency.rst b/doc/design-allocation-efficiency.rst
> index f375b27..292ec5e 100644
> --- a/doc/design-allocation-efficiency.rst
> +++ b/doc/design-allocation-efficiency.rst
> @@ -42,7 +42,7 @@ weight of 0.25, so that counting current violations still dominate.
>
>  Another consequence of this metric change is that the value 0 is no longer
>  obtainable: as soon as we have DRBD instance, we have to reserve memory.
> -However, in most cases only differences of scores influence decissions made.
> +However, in most cases only differences of scores influence decisions made.
>  In the few cases, were absolute values of the cluster score are specified,
>  they are interpreted as relative to the theoretical minimum of the reserved
>  memory score.
> diff --git a/doc/design-autorepair.rst b/doc/design-autorepair.rst
> index 5ab446b..4c33e1e 100644
> --- a/doc/design-autorepair.rst
> +++ b/doc/design-autorepair.rst
> @@ -107,7 +107,7 @@ present at this time. While this is known we won't solve these race
>  conditions in the first version.
>
>  It might also be useful to easily have an operation that tags all
> -instances matching a  filter on some charateristic. But again, this
> +instances matching a filter on some characteristic. But again, this
>  wouldn't be specific to this tag.
>
>  If there are multiple
> @@ -273,8 +273,8 @@ needs to look at it. To be decided).
>
>  A graph with the possible transitions follows; note that in the graph,
>  following the implementation, the two ``Needs repair`` states have been
> -coalesced into one; and the ``Suspended`` state disapears, for it
> -becames an attribute of the instance object (its auto-repair policy).
> +coalesced into one; and the ``Suspended`` state disappears, for it
> +becomes an attribute of the instance object (its auto-repair policy).
>
>  .. digraph:: "auto-repair-states"
>
> @@ -343,7 +343,7 @@ Possible repairs are:
>
>  Note that more than one of these operations may need to happen before a
>  full repair is completed (eg. if a drbd primary goes offline first a
> -failover will happen, then a replce-disks).
> +failover will happen, then a replace-disks).
>
>  The self-repair tool will first take care of all needs-repair instance
>  that can be brought into ``pending`` state, and transition them as
> diff --git a/doc/design-bulk-create.rst b/doc/design-bulk-create.rst
> index a16fdc3..37794b6 100644
> --- a/doc/design-bulk-create.rst
> +++ b/doc/design-bulk-create.rst
> @@ -58,7 +58,7 @@ of ``request`` dicts as described in :doc:`Operation specific input
>  placements in the order of the ``request`` field.
>
>  In addition, the old ``allocate`` request type will be deprecated and at
> -latest in Ganeti 2.8 incooperated into this new request. Current code
> +latest in Ganeti 2.8 incorporated into this new request. Current code
>  will need slight adaption to work with the new request. This needs
>  careful testing.
>
> diff --git a/doc/design-chained-jobs.rst b/doc/design-chained-jobs.rst
> index 8f06dc0..56b1185 100644
> --- a/doc/design-chained-jobs.rst
> +++ b/doc/design-chained-jobs.rst
> @@ -33,7 +33,7 @@ Proposed changes
>  ================
>
>  With the implementation of :ref:`job priorities
> -<jqueue-job-priority-design>` the processing code was re-architectured
> +<jqueue-job-priority-design>` the processing code was re-architected
>  and became a lot more versatile. It now returns jobs to the queue in
>  case the locks for an opcode can't be acquired, allowing other
>  jobs/opcodes to be run in the meantime.
> @@ -160,7 +160,7 @@ following possibilities:
>  Based on these arguments, the proposal is to do the following:
>
>  - Rename ``JOB_STATUS_WAITLOCK`` constant to ``JOB_STATUS_WAITING`` to
> -  reflect its actual meanting: the job is waiting for something
> +  reflect its actual meaning: the job is waiting for something
>  - While waiting for dependencies and locks, jobs are in the "waiting"
>    status
>  - Export dependency information in lock monitor; example output::
> diff --git a/doc/design-configlock.rst b/doc/design-configlock.rst
> index 9e650c7..06f91cc 100644
> --- a/doc/design-configlock.rst
> +++ b/doc/design-configlock.rst
> @@ -11,7 +11,7 @@ Current state and shortcomings
>  ==============================
>
>  As a result of the :doc:`design-daemons`, the configuration is held
> -in a proccess different from the processes carrying out the Ganeti
> +in a process different from the processes carrying out the Ganeti
>  jobs. Therefore, job processes have to contact WConfD in order to
>  change the configuration. Of course, these modifications of the
>  configuration need to be synchronised.
> @@ -23,7 +23,7 @@ update the configuration is to
>
>  - acquire the ``ConfigLock`` from WConfD,
>
> -- read the configration,
> +- read the configuration,
>
>  - write the modified configuration, and
>
> @@ -53,7 +53,7 @@ Proposed changes for an incremental improvement
>
>  Ideally, jobs would just send patches for the configuration to WConfD
>  that are applied by means of atomically updating the respective ``IORef``.
> -This, however, would require chaning all of Ganeti's logical units in
> +This, however, would require changing all of Ganeti's logical units in
>  one big change. Therefore, we propose to keep the ``ConfigLock`` and,
>  step by step, reduce its impact till it eventually will be just used
>  internally in the WConfD process.
> @@ -66,7 +66,7 @@ a shared config lock, and therefore necessarily read-only, will instead
>  use WConfD's ``readConfig`` used to obtain a snapshot of the configuration.
>  This will be done without modifying the locks. It is sound, as reads to
>  a Haskell ``IORef`` always yield a consistent value. From that snapshot
> -the required view is computed locally. This saves two lock-configurtion
> +the required view is computed locally. This saves two lock-configuration
>  write cycles per read and, additionally, does not block any concurrent
>  modifications.
>
> @@ -98,7 +98,7 @@ For a lot of operations, the regular locks already ensure that only
>  one job can modify a certain part of the configuration. For example,
>  only jobs with an exclusive lock on an instance will modify that
>  instance. Therefore, it can update that entity atomically,
> -without relying on the configuration lock to achive consistency.
> +without relying on the configuration lock to achieve consistency.
>  ``WConfD`` will provide such operations. To
>  avoid interference with non-atomic operations that still take the
>  config lock and write the configuration as a whole, this operation
> @@ -111,7 +111,7 @@ triggering a writeout of the lock status.
>  Note that the thread handling the request has to take the lock in its
>  own name and not in that of the requesting job. A writeout of the lock
>  status can still happen, triggered by other requests. Now, if
> -``WConfD`` gets restarted after the lock acquisition, if that happend
> +``WConfD`` gets restarted after the lock acquisition, if that happened
>  in the name of the job, it would own a lock without knowing about it,
>  and hence that lock would never get released.
>
> diff --git a/doc/design-cpu-pinning.rst b/doc/design-cpu-pinning.rst
> index b0ada07..fa233df 100644
> --- a/doc/design-cpu-pinning.rst
> +++ b/doc/design-cpu-pinning.rst
> @@ -1,3 +1,4 @@
> +==================
>  Ganeti CPU Pinning
>  ==================
>
> diff --git a/doc/design-cpu-speed.rst b/doc/design-cpu-speed.rst
> index 7787b79..96fd0c3 100644
> --- a/doc/design-cpu-speed.rst
> +++ b/doc/design-cpu-speed.rst
> @@ -20,7 +20,7 @@ instances, even for a cluster not running at full capacity.
>
>  For for one resources, however, hardware differences are not taken into
>  account: CPU speed. For CPU, the load is measured by the ratio of used virtual
> -to physical CPUs on the node. Balancing this measure implictly assumes
> +to physical CPUs on the node. Balancing this measure implicitly assumes
>  equal speed of all CPUs.
>
>
> diff --git a/doc/design-daemons.rst b/doc/design-daemons.rst
> index 425b6a8..069bff4 100644
> --- a/doc/design-daemons.rst
> +++ b/doc/design-daemons.rst
> @@ -138,7 +138,7 @@ proposed, and presented hereafter.
>    submitting jobs. Therefore, this daemon will also be the one responsible with
>    managing the job queue. When a job needs to be executed, the LuxiD will spawn
>    a separate process tasked with the execution of that specific job, thus making
> -  it easier to terminate the job itself, if needeed.  When a job requires locks,
> +  it easier to terminate the job itself, if needed.  When a job requires locks,
>    LuxiD will request them from WConfD.
>    In order to keep availability of the cluster in case of failure of the master
>    node, LuxiD will replicate the job queue to the other master candidates, by
> @@ -258,7 +258,7 @@ leaving the codebase in a consistent and usable state.
>     independent process. LuxiD will spawn a new (Python) process for every single
>     job. The RPCs will remain unchanged, and the LU code will stay as is as much
>     as possible.
> -   MasterD will cease to exist as a deamon on its own at this point, but not
> +   MasterD will cease to exist as a daemon on its own at this point, but not
>     before.
>
>  #. Improve job scheduling algorithm.
> @@ -480,7 +480,7 @@ protocol will allow the following operations on the set:
>    provided for convenience, it's redundant wrt. *list* and *update*. Immediate,
>    never fails.
>
> -Addidional restrictions due to lock implications:
> +Additional restrictions due to lock implications:
>    Ganeti supports locks that act as if a lock on a whole group (like all nodes)
>    were held. To avoid dead locks caused by the additional blockage of those
>    group locks, we impose certain restrictions. Whenever `A` is a group lock and
> diff --git a/doc/design-dedicated-allocation.rst b/doc/design-dedicated-allocation.rst
> index b0a81fc..661280f 100644
> --- a/doc/design-dedicated-allocation.rst
> +++ b/doc/design-dedicated-allocation.rst
> @@ -61,7 +61,7 @@ instance sizes.
>
>  If allocating in a node group with ``exclusive_storage`` set
>  to true, hail will try to minimise the pair of the lost-allocations
> -vector and the remaining disk space on the node afer, ordered
> +vector and the remaining disk space on the node after, ordered
>  lexicographically.
>
>  Example
> diff --git a/doc/design-file-based-storage.rst b/doc/design-file-based-storage.rst
> index 3ce89f5..61a4070 100644
> --- a/doc/design-file-based-storage.rst
> +++ b/doc/design-file-based-storage.rst
> @@ -131,7 +131,7 @@ Disadvantages:
>
>  * stable, but not as much tested as loopback driver
>
> -3) ubklback driver
> +3) ublkback driver
>  ^^^^^^^^^^^^^^^^^^
>
>  The Xen Roadmap states "Work is well under way to implement a
> @@ -365,6 +365,6 @@ the file-based disk-template.
>  Other hypervisors
>  +++++++++++++++++
>
> -Other hypervisors have mostly differnet ways to make storage available
> +Other hypervisors have mostly different ways to make storage available
>  to their virtual instances/machines. This is beyond the scope of this
>  document.
> diff --git a/doc/design-hugepages-support.rst b/doc/design-hugepages-support.rst
> index 62c4bce..d3174ae 100644
> --- a/doc/design-hugepages-support.rst
> +++ b/doc/design-hugepages-support.rst
> @@ -37,7 +37,7 @@ cluster level via the hypervisor parameter ``mem_path`` as::
>  This hypervisor parameter is inherited by all the instances as
>  default although it can be overriden at the instance level.
>
> -The following changes will be made to the inheritence behaviour.
> +The following changes will be made to the inheritance behaviour.
>
>  -  The hypervisor parameter   ``mem_path`` and all other hypervisor
>     parameters will be made available at the node group level (in
> @@ -47,7 +47,7 @@ The following changes will be made to the inheritence behaviour.
>         $ gnt-group add/modify\
>         > -H hv:parameter=value
>
> -   This changes the hypervisor inheritence level as::
> +   This changes the hypervisor inheritance level as::
>
>       cluster -> group -> OS -> instance
>
> diff --git a/doc/design-impexp2.rst b/doc/design-impexp2.rst
> index 5b996fe..7ebc3f1 100644
> --- a/doc/design-impexp2.rst
> +++ b/doc/design-impexp2.rst
> @@ -89,7 +89,7 @@ import/export, allowing the certificate to be used as a Certificate
>  Authority (CA). This worked by means of starting a new ``socat``
>  instance per instance import/export.
>
> -Under the version 2 model, a continously running HTTP server will be
> +Under the version 2 model, a continuously running HTTP server will be
>  used. This disallows the use of self-signed certificates for
>  authentication as the CA needs to be the same for all issued
>  certificates.
> @@ -264,7 +264,7 @@ issues) it should be retried using an exponential backoff delay. The
>  opcode submitter can specify for how long the transfer should be
>  retried.
>
> -At the end of a transfer, succssful or not, the source cluster must be
> +At the end of a transfer, successful or not, the source cluster must be
>  notified. A the same time the RSA key needs to be destroyed.
>
>  Support for HTTP proxies can be implemented by setting
> diff --git a/doc/design-linuxha.rst b/doc/design-linuxha.rst
> index 1a6a473..6bf78fe 100644
> --- a/doc/design-linuxha.rst
> +++ b/doc/design-linuxha.rst
> @@ -101,7 +101,7 @@ as a cloned resource that is active on all nodes.
>
>  In partial mode it will always return success (and thus trigger a
>  failure only upon an HA level or network failure). Full mode, which
> -initially will not be implemented, couls also check for the node daemon
> +initially will not be implemented, could also check for the node daemon
>  being unresponsive or other local conditions (TBD).
>
>  When a failure happens the HA notification system will trigger on all
> diff --git a/doc/design-location.rst b/doc/design-location.rst
> index 9d0f7aa..cbd65c3 100644
> --- a/doc/design-location.rst
> +++ b/doc/design-location.rst
> @@ -65,7 +65,7 @@ static information into account, essentially amounts to counting disks. In
>  this way, Ganeti will be willing to sacrifice equal numbers of disks on every
>  node in order to fulfill location requirements.
>
> -Appart from changing the balancedness metric, common-failure tags will
> +Apart from changing the balancedness metric, common-failure tags will
>  not have any other effect. In particular, as opposed to exclusion tags,
>  no hard guarantees are made: ``hail`` will try allocate an instance in
>  a common-failure avoiding way if possible, but still allocate the instance
> diff --git a/doc/design-monitoring-agent.rst b/doc/design-monitoring-agent.rst
> index 9c4871b..4185b3d 100644
> --- a/doc/design-monitoring-agent.rst
> +++ b/doc/design-monitoring-agent.rst
> @@ -85,7 +85,7 @@ the data collectors:
>  ``category``
>    A collector can belong to a given category of collectors (e.g.: storage
>    collectors, daemon collector). This means that it will have to provide a
> -  minumum set of prescribed fields, as documented for each category.
> +  minimum set of prescribed fields, as documented for each category.
>    This field will contain the name of the category the collector belongs to,
>    if any, or just the ``null`` value.
>
> @@ -175,7 +175,7 @@ in its ``data`` section, at least the following field:
>      It assumes a numeric value, encoded in such a way to allow using a bitset
>      to easily distinguish which states are currently present in the whole
>      cluster. If the bitwise OR of all the ``status`` fields is 0, the cluster
> -    is completely healty.
> +    is completely healthy.
>      The status codes are as follows:
>
>      ``0``
> @@ -206,7 +206,7 @@ in its ``data`` section, at least the following field:
>
>      If the status code is ``2``, the message should specify what has gone
>      wrong.
> -    If the status code is ``4``, the message shoud explain why it was not
> +    If the status code is ``4``, the message should explain why it was not
>      possible to determine a proper status.
>
>  The ``data`` section will also contain all the fields describing the gathered
> @@ -450,7 +450,7 @@ each representing one logical volume and providing the following fields:
>    Type of LV segment.
>
>  ``seg_start``
> -  Offset within the LVto the start of the segment in bytes.
> +  Offset within the LV to the start of the segment in bytes.
>
>  ``seg_start_pe``
>    Offset within the LV to the start of the segment in physical extents.
> @@ -603,7 +603,7 @@ collector will provide the following fields:
>        The speed of the synchronization.
>
>      ``want``
> -      The desiderd speed of the synchronization.
> +      The desired speed of the synchronization.
>
>      ``speedUnit``
>        The measurement unit of the ``speed`` and ``want`` values. Expressed
> @@ -655,7 +655,7 @@ that is not generic enough be abstracted.
>
>  The ``kind`` field will be ``0`` (`Performance reporting collectors`_).
>
> -Each of the hypervisor data collectory will be of ``category``: ``hypervisor``.
> +Each of the hypervisor data collectors will be of ``category``: ``hypervisor``.
>
>  Node OS resources report
>  ++++++++++++++++++++++++
> @@ -869,7 +869,7 @@ from the nodes from a monitoring system and for Ganeti itself.
>  One extra feature we may need is a way to query for only sub-parts of
>  the report (eg. instances status only). This can be done by passing
>  arguments to the HTTP GET, which will be defined when we get to this
> -funtionality.
> +functionality.
>
>  Finally the :doc:`autorepair system design <design-autorepair>`. system
>  (see its design) can be expanded to use the monitoring agent system as a
> diff --git a/doc/design-multi-reloc.rst b/doc/design-multi-reloc.rst
> index 039d51d..8105cab 100644
> --- a/doc/design-multi-reloc.rst
> +++ b/doc/design-multi-reloc.rst
> @@ -1,6 +1,6 @@
> -====================================
> -Moving instances accross node groups
> -====================================
> +===================================
> +Moving instances across node groups
> +===================================
>
>  This design document explains the changes needed in Ganeti to perform
>  instance moves across node groups. Reader familiarity with the following
> diff --git a/doc/design-network.rst b/doc/design-network.rst
> index 8b52f62..1b9d933 100644
> --- a/doc/design-network.rst
> +++ b/doc/design-network.rst
> @@ -34,7 +34,7 @@ b) The NIC network information is incomplete, lacking netmask and
>     enable Ganeti nodes to become more self-contained and be able to
>     infer system configuration (e.g. /etc/network/interfaces content)
>     from Ganeti configuration. This should make configuration of
> -   newly-added nodes a lot easier and less dependant on external
> +   newly-added nodes a lot easier and less dependent on external
>     tools/procedures.
>
>  c) Instance placement must explicitly take network availability in
> @@ -112,7 +112,7 @@ reservation, using a TemporaryReservationManager.
>
>  It should be noted that IP pool management is performed only for IPv4
>  networks, as they are expected to be densely populated. IPv6 networks
> -can use different approaches, e.g. sequential address asignment or
> +can use different approaches, e.g. sequential address assignment or
>  EUI-64 addresses.
>
>  New NIC parameter: network
> @@ -255,7 +255,7 @@ Network addition/deletion
>   gnt-network add --network=192.168.100.0/28 --gateway=192.168.100.1 \
>                   --network6=2001:db8:2ffc::/64 --gateway6=2001:db8:2ffc::1 \
>                   --add-reserved-ips=192.168.100.10,192.168.100.11 net100
> -  (Checks for already exising name and valid IP values)
> +  (Checks for already existing name and valid IP values)
>   gnt-network remove network_name
>    (Checks if not connected to any nodegroup)
>
> diff --git a/doc/design-node-add.rst b/doc/design-node-add.rst
> index e1d460d..651c9e2 100644
> --- a/doc/design-node-add.rst
> +++ b/doc/design-node-add.rst
> @@ -1,3 +1,4 @@
> +=====================================
>  Design for adding a node to a cluster
>  =====================================
>
> diff --git a/doc/design-node-security.rst b/doc/design-node-security.rst
> index 1215277..4d14319 100644
> --- a/doc/design-node-security.rst
> +++ b/doc/design-node-security.rst
> @@ -17,7 +17,7 @@ Objective
>  Up till 2.10, Ganeti distributes security-relevant keys to all nodes,
>  including nodes that are neither master nor master-candidates. Those
>  keys are the private and public SSH keys for node communication and the
> -SSL certficate and private key for RPC communication. Objective of this
> +SSL certificate and private key for RPC communication. Objective of this
>  design is to limit the set of nodes that can establish ssh and RPC
>  connections to the master and master candidates.
>
> @@ -125,7 +125,7 @@ the current powers a user of the RAPI interface would have. The
>  That means, an attacker that has access to the RAPI interface, can make
>  all non-master-capable nodes master-capable, and then increase the master
>  candidate pool size till all machines are master candidates (or at least
> -a particular machine that he is aming for). This means that with RAPI
> +a particular machine that he is aiming for). This means that with RAPI
>  access and a compromised normal node, one can make this node a master
>  candidate and then still have the power to compromise the whole cluster.
>
> @@ -137,7 +137,7 @@ To mitigate this issue, we propose the following changes:
>    set to ``False`` by default and can itself only be changed on the
>    commandline. In this design doc, we refer to the flag as the
>    "rapi flag" from here on.
> -- Only if the ``master_capabability_rapi_modifiable`` switch is set to
> +- Only if the ``master_capability_rapi_modifiable`` switch is set to
>    ``True``, it is possible to modify the master-capability flag of
>    nodes.
>
> @@ -168,7 +168,7 @@ However, we think these are rather confusing semantics of the involved
>  flags and thus we go with proposed design.
>
>  Note that this change will break RAPI compatibility, at least if the
> -rapi flag is not explicitely set to ``True``. We made this choice to
> +rapi flag is not explicitly set to ``True``. We made this choice to
>  have the more secure option as default, because otherwise it is
>  unlikely to be widely used.
>
> @@ -272,7 +272,7 @@ it was issued, Ganeti does not do anything.
>
>  Note that when you demote a node from master candidate to normal node, another
>  master-capable and normal node will be promoted to master candidate. For this
> -newly promoted node, the same changes apply as if it was explicitely promoted.
> +newly promoted node, the same changes apply as if it was explicitly promoted.
>
>  The same behavior should be ensured for the corresponding rapi command.
>
> @@ -283,7 +283,7 @@ Offlining and onlining a node
>  When offlining a node, it immediately loses its role as master or master
>  candidate as well. When it is onlined again, it will become master
>  candidate again if it was so before. The handling of the keys should be done
> -in the same way as when the node is explicitely promoted or demoted to or from
> +in the same way as when the node is explicitly promoted or demoted to or from
>  master candidate. See the previous section for details.
>
>  This affects the command:
> @@ -346,7 +346,7 @@ will be backed up and not simply overridden.
>  Downgrades
>  ~~~~~~~~~~
>
> -These downgrading steps will be implemtented from 2.13 to 2.12:
> +These downgrading steps will be implemented from 2.13 to 2.12:
>
>  - The master node's private/public key pair will be distributed to all
>    nodes (via SSH) and the individual SSH keys will be backed up.
> @@ -389,7 +389,7 @@ in the design.
>    and client certificate, we generate a common server certificate (and
>    the corresponding private key) for all nodes and a different client
>    certificate (and the corresponding private key) for each node. The
> -  server certificate will be self-signed. The client certficate will
> +  server certificate will be self-signed. The client certificate will
>    be signed by the server certificate. The client certificates will
>    use the node UUID as serial number to ensure uniqueness within the
>    cluster. They will use the host's hostname as the certificate
> @@ -466,7 +466,7 @@ Alternative proposals:
>    as trusted CAs. As this would have resulted in having to restart
>    noded on all nodes every time a node is added, removed, demoted
>    or promoted, this was not feasible and we switched to client
> -  certficates which are signed by the server certificate.
> +  certificates which are signed by the server certificate.
>  - Instead of generating a client certificate per node, one could think
>    of just generating two different client certificates, one for normal
>    nodes and one for master candidates. Noded could then just check if
> @@ -495,7 +495,7 @@ created. With our design, two certificates (and corresponding keys)
>  need to be created, a server certificate to be distributed to all nodes
>  and a client certificate only to be used by this particular node. In the
>  following, we use the term node daemon certificate for the server
> -certficate only.
> +certificate only.
>
>  In the cluster configuration, the candidate map is created. It is
>  populated with the respective entry for the master node. It is also
> @@ -508,7 +508,7 @@ written to ssconf.
>  When a node is added, the server certificate is copied to the node (as
>  before). Additionally, a new client certificate (and the corresponding
>  private key) is created on the new node to be used only by the new node
> -as client certifcate.
> +as client certificate.
>
>  If the new node is a master candidate, the candidate map is extended by
>  the new node's data. As before, the updated configuration is distributed
> @@ -532,7 +532,7 @@ distributed to all nodes. If there was already an entry for the node,
>  we override it.
>
>  On demotion of a master candidate, the node's entry in the candidate map
> -gets removed and the updated configuration gets redistibuted.
> +gets removed and the updated configuration gets redistributed.
>
>  The same procedure applied to onlining and offlining master candidates.
>
> diff --git a/doc/design-oob.rst b/doc/design-oob.rst
> index 78e468b..0af4464 100644
> --- a/doc/design-oob.rst
> +++ b/doc/design-oob.rst
> @@ -1,3 +1,4 @@
> +====================================
>  Ganeti Node OOB Management Framework
>  ====================================
>
> diff --git a/doc/design-opportunistic-locking.rst b/doc/design-opportunistic-locking.rst
> index cd3da44..42bfef8 100644
> --- a/doc/design-opportunistic-locking.rst
> +++ b/doc/design-opportunistic-locking.rst
> @@ -1,3 +1,4 @@
> +====================================================================
>  Design for parallelized instance creations and opportunistic locking
>  ====================================================================
>
> diff --git a/doc/design-optables.rst b/doc/design-optables.rst
> index 6c0c1e0..23672df 100644
> --- a/doc/design-optables.rst
> +++ b/doc/design-optables.rst
> @@ -39,7 +39,7 @@ Proposed changes
>  ================
>
>  We propose to add filters on the job queue. These will be part of the
> -configuration and as such are persisted with it. Conceptionally, the
> +configuration and as such are persisted with it. Conceptually, the
>  filters are always processed when a job enters the queue and while it
>  is still in the queue. Of course, in the implementation, reevaluation
>  is only carried out, if something could make the result change, e.g.,
> diff --git a/doc/design-ovf-support.rst b/doc/design-ovf-support.rst
> index 1b972ae..a19ced2 100644
> --- a/doc/design-ovf-support.rst
> +++ b/doc/design-ovf-support.rst
> @@ -84,7 +84,7 @@ currently use OVF.
>    OpenStack: mostly ``.vmdk``
>
>  In our implementation of the OVF we allow a choice between raw, cow and
> -vmdk disk formats for both import and export. Other formats covertable
> +vmdk disk formats for both import and export. Other formats convertible
>  using ``qemu-img`` are allowed in import mode, but not tested.
>  The justification is the following:
>
> diff --git a/doc/design-reservations.rst b/doc/design-reservations.rst
> index b54071b..984a114 100644
> --- a/doc/design-reservations.rst
> +++ b/doc/design-reservations.rst
> @@ -83,8 +83,8 @@ Representation in the Configuration
>  -----------------------------------
>
>  As for most part of the system, forthcoming instances and their disks are to
> -be treated as if they were real. Therefore, the wire representation will
> -be by adding an additional, optional, ``fortcoming`` flag to the ``instances``
> +be treated as if they were real. Therefore, the wire representation will be
> +by adding an additional, optional, ``forthcoming`` flag to the ``instances``
>  and ``disks`` objects. Additionally, the internal consistency condition will
>  be relaxed to have all non-uuid fields optional if an instance or disk is
>  forthcoming.
> @@ -112,7 +112,7 @@ and the remaining fields depend on the value of that field). Of course, in
>  the Haskell part of our code base, this will be represented in the standard way
>  having two constructors for the type; additionally there will be accessors
>  for all the fields of the JSON representation (yielding ``Maybe`` values,
> -as they can be optional if we're in the ``Forthcoming`` constuctor).
> +as they can be optional if we're in the ``Forthcoming`` constructor).
>
>
>  Adaptions of htools
> diff --git a/doc/design-resource-model.rst b/doc/design-resource-model.rst
> index 8131848..3909a26 100644
> --- a/doc/design-resource-model.rst
> +++ b/doc/design-resource-model.rst
> @@ -629,7 +629,7 @@ in these structures:
>  +---------------+----------------------------------+--------------+
>  |disk_size      |Allowed disk size                 |int           |
>  +---------------+----------------------------------+--------------+
> -|nic_count      |Alowed NIC count                  |int           |
> +|nic_count      |Allowed NIC count                 |int           |
>  +---------------+----------------------------------+--------------+
>
>  Inheritance
> @@ -709,7 +709,7 @@ brackets.
>  +========+==============+=========================+=====================+======+
>  |plain   |stripes       |How many stripes to use  |Configured at        |int   |
>  |        |              |for newly created (plain)|./configure time, not|      |
> -|        |              |logical voumes           |overridable at       |      |
> +|        |              |logical volumes          |overridable at       |      |
>  |        |              |                         |runtime              |      |
>  +--------+--------------+-------------------------+---------------------+------+
>  |drbd    |data-stripes  |How many stripes to use  |Same as for          |int   |
> @@ -829,7 +829,7 @@ For the new memory model, we'll add the following parameters, in a
>  dictionary indexed by the hypervisor name (node attribute
>  ``hv_state``). The rationale is that, even though multi-hypervisor
>  clusters are rare, they make sense sometimes, and thus we need to
> -support multipe node states (one per hypervisor).
> +support multiple node states (one per hypervisor).
>
>  Since usually only one of the multiple hypervisors is the 'main' one
>  (and the others used sparringly), capacity computation will still only
> @@ -893,7 +893,7 @@ are at node group level); the proposal is to do this via a cluster-level
>
>  Beside the per-hypervisor attributes, we also have disk attributes,
>  which are queried directly on the node (without hypervisor
> -involvment). The are stored in a separate attribute (``disk_state``),
> +involvement). They are stored in a separate attribute (``disk_state``),
>  which is indexed per storage type and name; currently this will be just
>  ``DT_PLAIN`` and the volume name as key.
>
> diff --git a/doc/design-restricted-commands.rst b/doc/design-restricted-commands.rst
> index 167c816..e1ed4de 100644
> --- a/doc/design-restricted-commands.rst
> +++ b/doc/design-restricted-commands.rst
> @@ -1,3 +1,4 @@
> +=====================================
>  Design for executing commands via RPC
>  =====================================
>
> diff --git a/doc/design-shared-storage-redundancy.rst b/doc/design-shared-storage-redundancy.rst
> index 14e8bc1..7dea87d 100644
> --- a/doc/design-shared-storage-redundancy.rst
> +++ b/doc/design-shared-storage-redundancy.rst
> @@ -5,7 +5,7 @@ N+1 redundancy for shared storage
>  .. contents:: :depth: 4
>
>  This document describes how N+1 redundancy is achieved
> -for instanes using shared storage.
> +for instances using shared storage.
>
>
>  Current state and shortcomings
> @@ -44,7 +44,7 @@ for DRBD is to be taken into account for all choices affecting instance
>  location, including instance allocation and balancing.
>
>  For shared-storage instances, they can move everywhere within the
> -node group. So, in practise, this is mainly a question of capacity
> +node group. So, in practice, this is mainly a question of capacity
>  planing, especially is most instances have the same size. Nevertheless,
>  offcuts if instances don't fill a node entirely may not be ignored.
>
> @@ -53,7 +53,7 @@ Modifications to existing tools
>  -------------------------------
>
>  - ``hail`` will compute and rank possible allocations as usual. However,
> -  before returing a choice it will filter out allocations that are
> +  before returning a choice it will filter out allocations that are
>    not N+1 redundant.
>
>  - Normal ``gnt-cluster verify`` will not be changed; in particular,
> @@ -68,6 +68,6 @@ Modifications to existing tools
>  - ``hspace`` computing the capacity for DRBD instances will be unchanged.
>    For shared storage instances, however, it will first evacuate one node
>    and then compute capacity as normal pretending that node was offline.
> -  While this technically deviates from interatively doing what hail does,
> +  While this technically deviates from interactively doing what hail does,
>    it should still give a reasonable estimate of the cluster capacity without
>    significantly increasing the algorithmic complexity.
> diff --git a/doc/design-shared-storage.rst b/doc/design-shared-storage.rst
> index 0390264..8f34f76 100644
> --- a/doc/design-shared-storage.rst
> +++ b/doc/design-shared-storage.rst
> @@ -284,15 +284,15 @@ command line::
>                                              param1=value1,param2=value2
>
>  The above parameters will be exported to the ExtStorage provider's
> -scripts as the enviromental variables:
> +scripts as the environment variables:
>
>  - `EXTP_PARAM1 = str(value1)`
>  - `EXTP_PARAM2 = str(value2)`
>
>  We will also introduce a new Ganeti client called `gnt-storage` which
>  will be used to diagnose ExtStorage providers and show information about
> -them, similarly to the way  `gnt-os diagose` and `gnt-os info` handle OS
> -definitions.
> +them, similarly to the way  `gnt-os diagnose` and `gnt-os info` handle
> +OS definitions.
>
>  ExtStorage Interface support for userspace access
>  =================================================
> diff --git a/doc/design-ssh-ports.rst b/doc/design-ssh-ports.rst
> index 42c6c30..079607c 100644
> --- a/doc/design-ssh-ports.rst
> +++ b/doc/design-ssh-ports.rst
> @@ -11,7 +11,7 @@ on nodes with non-standard port numbers.
>  Current state and shortcomings
>  ==============================
>
> -All SSH deamons are expected to be running on the default port 22. It has been
> +All SSH daemons are expected to be running on the default port 22. It has been
>  requested by Ganeti users (`Issue 235`_) to allow SSH daemons run on
>  non-standard ports as well.
>
> diff --git a/doc/design-storagetypes.rst b/doc/design-storagetypes.rst
> index f1f022d..ee53c3f 100644
> --- a/doc/design-storagetypes.rst
> +++ b/doc/design-storagetypes.rst
> @@ -42,7 +42,7 @@ the currently implemented disk templates: ``blockdev``, ``diskless``, ``drbd``,
>  ``ext``, ``file``, ``plain``, ``rbd``, and ``sharedfile``. See
>  ``DISK_TEMPLATES`` in ``constants.py``.
>
> -Note that the abovementioned list of enabled disk types is just a "mechanism"
> +Note that the above-mentioned list of enabled disk types is just a "mechanism"
>  parameter that defines which disk templates the cluster can use. Further
>  filtering about what's allowed can go in the ipolicy, which is not covered in
>  this design doc. Note that it is possible to force an instance to use a disk
> diff --git a/doc/design-sync-rate-throttling.rst b/doc/design-sync-rate-throttling.rst
> index 47549e6..abd1f3b 100644
> --- a/doc/design-sync-rate-throttling.rst
> +++ b/doc/design-sync-rate-throttling.rst
> @@ -1,3 +1,4 @@
> +=========================
>  DRBD Sync Rate Throttling
>  =========================
>
> diff --git a/doc/design-upgrade.rst b/doc/design-upgrade.rst
> index 7a02407..018cdc7 100644
> --- a/doc/design-upgrade.rst
> +++ b/doc/design-upgrade.rst
> @@ -50,7 +50,7 @@ These paths will be changed in the following way.
>    ``${PREFIX}/share/ganeti/${VERSION}`` so that they see their respective
>    Ganeti library. ``${PREFIX}/share/ganeti/default`` is a symbolic link to
>    ``${sysconfdir}/ganeti/share`` which, in turn, is a symbolic link to
> -  ``${PREFIX}/share/ganeti/${VERSION}``. For all python executatables (like
> +  ``${PREFIX}/share/ganeti/${VERSION}``. For all python executables (like
>    ``gnt-cluster``, ``gnt-node``, etc) symbolic links going through
>    ``${PREFIX}/share/ganeti/default`` are added under ``${PREFIX}/sbin``.
>
> @@ -67,12 +67,12 @@ These paths will be changed in the following way.
>
>  The set of links for ganeti binaries might change between the versions.
>  However, as the file structure under ``${libdir}/ganeti/${VERSION}`` reflects
> -that of ``/``, two links of differnt versions will never conflict. Similarly,
> +that of ``/``, two links of different versions will never conflict. Similarly,
>  the symbolic links for the python executables will never conflict, as they
>  always point to a file with the same basename directly under
>  ``${PREFIX}/share/ganeti/default``. Therefore, each version will make sure that
>  enough symbolic links are present in ``${PREFIX}/bin``, ``${PREFIX}/sbin`` and
> -so on, even though some might be dangling, if a differnt version of ganeti is
> +so on, even though some might be dangling, if a different version of ganeti is
>  currently active.
>
>  The extra indirection through ``${sysconfdir}`` allows installations that choose
> @@ -201,7 +201,7 @@ following actions.
>
>  - A backup of all Ganeti-related status information is created for
>    manual rollbacks. While the normal way of rolling back after an
> -  upgrade should be calling ``gnt-clsuter upgrade`` from the newer version
> +  upgrade should be calling ``gnt-cluster upgrade`` from the newer version
>    with the older version as argument, a full backup provides an
>    additional safety net, especially for jump-upgrades (skipping
>    intermediate minor versions).
> @@ -252,7 +252,7 @@ rolled back).
>  To achieve this, ``gnt-cluster upgrade`` will support a ``--resume``
>  option. It is recommended
>  to have ``gnt-cluster upgrade --resume`` as an at-reboot task in the crontab.
> -The ``gnt-cluster upgrade --resume`` comand first verifies that
> +The ``gnt-cluster upgrade --resume`` command first verifies that
>  it is running on the master node, using the same requirement as for
>  starting the master daemon, i.e., confirmed by a majority of all
>  nodes. If it is not the master node, it will remove any possibly
> diff --git a/lib/client/gnt_cluster.py b/lib/client/gnt_cluster.py
> index e23fb50..3678c9c 100644
> --- a/lib/client/gnt_cluster.py
> +++ b/lib/client/gnt_cluster.py
> @@ -1172,7 +1172,7 @@ def _RenewCrypto(new_cluster_cert, new_rapi_cert, # pylint: disable=R0911
>    if new_rapi_cert or new_spice_cert or new_confd_hmac_key or new_cds:
>      RunWhileClusterStopped(ToStdout, _RenewCryptoInner)
>
> -  # If only node certficates are recreated, call _RenewClientCerts only.
> +  # If only node certificates are recreated, call _RenewClientCerts only.
>    if new_node_cert and not new_cluster_cert:
>      RunWhileDaemonsStopped(ToStdout, [constants.NODED, constants.WCONFD],
>                             _RenewClientCerts, verbose=verbose, debug=debug)
> diff --git a/lib/client/gnt_job.py b/lib/client/gnt_job.py
> index 3dd4eff..ab20e27 100644
> --- a/lib/client/gnt_job.py
> +++ b/lib/client/gnt_job.py
> @@ -195,7 +195,7 @@ def AutoArchiveJobs(opts, args):
>
>
>  def _MultiJobAction(opts, args, cl, stdout_fn, ask_fn, question, action_fn):
> -  """Applies a function to multipe jobs.
> +  """Applies a function to multiple jobs.
>
>    @param opts: Command line options
>    @type args: list
> diff --git a/man/gnt-cluster.rst b/man/gnt-cluster.rst
> index f34677a..76d64ef 100644
> --- a/man/gnt-cluster.rst
> +++ b/man/gnt-cluster.rst
> @@ -601,7 +601,7 @@ possible to create instances with disk templates that are not enabled in
>  the cluster. It is also not possible to disable a disk template when there
>  are still instances using it. The first disk template in the list of
>  enabled disk template is the default disk template. It will be used for
> -instance creation, if no disk template is requested explicitely.
> +instance creation, if no disk template is requested explicitly.
>
>  The ``--install-image`` option specifies the location of the OS image to
>  use to run the OS scripts inside a virtualized environment. This can be
> @@ -874,7 +874,7 @@ The option ``--new-cluster-certificate`` will regenerate the
>  cluster-internal server SSL certificate. The option
>  ``--new-node-certificates`` will generate new node SSL
>  certificates for all nodes. Note that for the regeneration of
> -of the server SSL certficate will invoke a regeneration of the
> +the server SSL certificate will invoke a regeneration of the
>  node certificates as well, because node certificates are signed
>  by the server certificate and thus have to be recreated and
>  signed by the new server certificate. Nodes which are offline
> diff --git a/man/mon-collector.rst b/man/mon-collector.rst
> index 0912e81..045745b 100644
> --- a/man/mon-collector.rst
> +++ b/man/mon-collector.rst
> @@ -80,7 +80,7 @@ one:
>    The IP address the ConfD daemon is listening on.
>
>  -p *port-number*, \--port=*port-number*
> -  The port the ConfD deamon is listening on.
> +  The port the ConfD daemon is listening on.
>
>  LOGICAL VOLUMES
>  ~~~~~~~~~~~~~~~
> @@ -100,7 +100,7 @@ where the daemon is listening, in case it's not the default one:
>    The IP address the ConfD daemon is listening on.
>
>  -p *port-number*, \--port=*port-number*
> -  The port the ConfD deamon is listening on.
> +  The port the ConfD daemon is listening on.
>
>  Instead of accessing the live data on the cluster, the tool can also read data
>  serialized on files (mainly for testing purposes). Namely:
> diff --git a/src/Ganeti/Confd/Server.hs b/src/Ganeti/Confd/Server.hs
> index 8eb9182..4941c9a 100644
> --- a/src/Ganeti/Confd/Server.hs
> +++ b/src/Ganeti/Confd/Server.hs
> @@ -175,7 +175,7 @@ buildResponse cdata req@(ConfdRequest { confdRqType = ReqNodeRoleByName }) = do
>            clusterSerial . configCluster $ fst cdata)
>
>  buildResponse cdata (ConfdRequest { confdRqType = ReqNodePipList }) =
> -  -- note: we use foldlWithKey because that's present accross more
> +  -- note: we use foldlWithKey because that's present across more
>    -- versions of the library
>    return (ReplyStatusOk, J.showJSON $
>            M.foldlWithKey (\accu _ n -> nodePrimaryIp n:accu) []
> @@ -183,7 +183,7 @@ buildResponse cdata (ConfdRequest { confdRqType = ReqNodePipList }) =
>            clusterSerial . configCluster $ fst cdata)
>
>  buildResponse cdata (ConfdRequest { confdRqType = ReqMcPipList }) =
> -  -- note: we use foldlWithKey because that's present accross more
> +  -- note: we use foldlWithKey because that's present across more
>    -- versions of the library
>    return (ReplyStatusOk, J.showJSON $
>            M.foldlWithKey (\accu _ n -> if nodeMasterCandidate n
> diff --git a/src/Ganeti/THH.hs b/src/Ganeti/THH.hs
> index b27dcfd..9447710 100644
> --- a/src/Ganeti/THH.hs
> +++ b/src/Ganeti/THH.hs
> @@ -1041,7 +1041,7 @@ buildLens (fnm, fdnm) (rnm, rdnm) nm pfx ar (field, i) = do
>  -- be a JSON object, dispatching on the "forthcoming" key.
>  buildObjectWithForthcoming ::
>    String -- ^ Name of the newly defined type
> -  -> String -- ^ base prefix for field names; for the real and fortcoming
> +  -> String -- ^ base prefix for field names; for the real and forthcoming
>              -- variant, with base prefix will be prefixed with "real"
>              -- and forthcoming, respectively.
>    -> [Field] -- ^ List of fields in the real version
> diff --git a/src/Ganeti/WConfd/ConfigState.hs b/src/Ganeti/WConfd/ConfigState.hs
> index fa6e754..443b9be 100644
> --- a/src/Ganeti/WConfd/ConfigState.hs
> +++ b/src/Ganeti/WConfd/ConfigState.hs
> @@ -70,7 +70,7 @@ bumpSerial :: (SerialNoObjectL a, TimeStampObjectL a) => ClockTime -> a -> a
>  bumpSerial now = set mTimeL now . over serialL succ
>
>  -- | Given two versions of the configuration, determine if its distribution
> --- needs to be fully commited before returning the corresponding call to
> +-- needs to be fully committed before returning the corresponding call to
>  -- WConfD.
>  needsFullDist :: ConfigState -> ConfigState -> Bool
>  needsFullDist = on (/=) (watched . csConfigData)
> --
> 2.8.0.rc3.226.g39d4020
>
