Re: [Xen-devel] [PATCH v3 3/3] docs: add pod variant of xl-numa-placement

2017-07-27 Thread Dario Faggioli
On Wed, 2017-07-26 at 16:39 +0200, Olaf Hering wrote:
> Convert source for xl-numa-placement.7 from markdown to pod.
> This removes the buildtime requirement for pandoc, and subsequently
> the
> need for ghc, in the chain for BuildRequires of xen.rpm.
> 
> Signed-off-by: Olaf Hering 
>
Reviewed-by: Dario Faggioli 

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)



[Xen-devel] [PATCH v3 3/3] docs: add pod variant of xl-numa-placement

2017-07-26 Thread Olaf Hering
Convert source for xl-numa-placement.7 from markdown to pod.
This removes the buildtime requirement for pandoc, and subsequently the
need for ghc, in the chain for BuildRequires of xen.rpm.

Signed-off-by: Olaf Hering 
---
 ...lacement.markdown.7 => xl-numa-placement.pod.7} | 166 ++---
 1 file changed, 110 insertions(+), 56 deletions(-)
 rename docs/man/{xl-numa-placement.markdown.7 => xl-numa-placement.pod.7} (74%)
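
To render the converted page, the plain perl toolchain is enough; roughly
something like the following (the exact options used by Xen's docs makefile
may differ, this is only a sketch):

    # turn the POD source into a section 7 man page, no pandoc/ghc involved
    pod2man --section=7 --name=xl-numa-placement \
        xl-numa-placement.pod.7 > xl-numa-placement.7
    # preview the result
    man -l xl-numa-placement.7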

diff --git a/docs/man/xl-numa-placement.markdown.7 b/docs/man/xl-numa-placement.pod.7
similarity index 74%
rename from docs/man/xl-numa-placement.markdown.7
rename to docs/man/xl-numa-placement.pod.7
index f863492093..54a444172e 100644
--- a/docs/man/xl-numa-placement.markdown.7
+++ b/docs/man/xl-numa-placement.pod.7
@@ -1,6 +1,12 @@
-# Guest Automatic NUMA Placement in libxl and xl #
+=encoding utf8
 
-## Rationale ##
+=head1 NAME
+
+Guest Automatic NUMA Placement in libxl and xl
+
+=head1 DESCRIPTION
+
+=head2 Rationale
 
 NUMA (which stands for Non-Uniform Memory Access) means that the memory
 accessing times of a program running on a CPU depend on the relative
@@ -17,13 +23,14 @@ running memory-intensive workloads on a shared host. In fact, the cost
 of accessing non node-local memory locations is very high, and the
 performance degradation is likely to be noticeable.
 
-For more information, have a look at the [Xen NUMA Introduction][numa_intro]
+For more information, have a look at the L<http://wiki.xen.org/wiki/Xen_NUMA_Introduction>
 page on the Wiki.
 
-## Xen and NUMA machines: the concept of _node-affinity_ ##
+
+=head2 Xen and NUMA machines: the concept of I<node-affinity>
 
 The Xen hypervisor deals with NUMA machines throughout the concept of
-_node-affinity_. The node-affinity of a domain is the set of NUMA nodes
+I<node-affinity>. The node-affinity of a domain is the set of NUMA nodes
 of the host where the memory for the domain is being allocated (mostly,
 at domain creation time). This is, at least in principle, different and
 unrelated with the vCPU (hard and soft, see below) scheduling affinity,
@@ -42,15 +49,16 @@ it is very important to "place" the domain correctly when it is first
 created, as most of its memory is allocated at that time and can
 not (for now) be moved easily.
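
For example, the topology that placement has to work with, i.e. how many
NUMA nodes the host has, how much memory is free on each of them and which
pCPUs belong to which node, can be inspected with xl itself (the exact
output format varies between Xen versions):

    # show the host NUMA topology
    xl info -n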
 
-### Placing via pinning and cpupools ###
+
+=head2 Placing via pinning and cpupools
 
 The simplest way of placing a domain on a NUMA node is setting the hard
 scheduling affinity of the domain's vCPUs to the pCPUs of the node. This
 also goes under the name of vCPU pinning, and can be done through the
 "cpus=" option in the config file (more about this below). Another option
 is to pool together the pCPUs spanning the node and put the domain in
-such a _cpupool_ with the "pool=" config option (as documented in our
-[Wiki][cpupools_howto]).
+such a I<cpupool> with the "pool=" config option (as documented in our
+L<http://wiki.xen.org/wiki/Cpupools_Howto>).
 
 In both the above cases, the domain will not be able to execute outside
 the specified set of pCPUs for any reason, even if all those pCPUs are
@@ -59,7 +67,8 @@ busy doing something else while there are others, idle, pCPUs.
 So, when doing this, local memory accesses are 100% guaranteed, but that
 may come at the cost of some load imbalances.
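
For instance, assuming the host's node 1 consists of pCPUs 4-7, and that a
cpupool spanning that node (here called "pool-node1") has been created
beforehand (e.g. with xl cpupool-create), the two approaches would look
roughly like this in the domain config file:

    # hard-pin the guest's vCPUs to the pCPUs of node 1 ...
    cpus = "4-7"

    # ... or, alternatively, put the guest in a cpupool covering node 1
    pool = "pool-node1"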
 
-### NUMA aware scheduling ###
+
+=head2 NUMA aware scheduling
 
 If using the credit1 scheduler, and starting from Xen 4.3, the scheduler
 itself always tries to run the domain's vCPUs on one of the nodes in
@@ -87,21 +96,37 @@ workload.
 
 Notice that, for each vCPU, the following three scenarios are possbile:
 
-  * a vCPU *is pinned* to some pCPUs and *does not have* any soft affinity
-In this case, the vCPU is always scheduled on one of the pCPUs to which
-it is pinned, without any specific peference among them.
-  * a vCPU *has* its own soft affinity and *is not* pinned to any particular
-pCPU. In this case, the vCPU can run on every pCPU. Nevertheless, the
-scheduler will try to have it running on one of the pCPUs in its soft
-affinity;
-  * a vCPU *has* its own vCPU soft affinity and *is also* pinned to some
-pCPUs. In this case, the vCPU is always scheduled on one of the pCPUs
-onto which it is pinned, with, among them, a preference for the ones
-that also forms its soft affinity. In case pinning and soft affinity
-form two disjoint sets of pCPUs, pinning "wins", and the soft affinity
-is just ignored.
-
-## Guest placement in xl ##
+=over
+
+=item *
+
+a vCPU I<is pinned> to some pCPUs and I<does not have> any soft affinity.
+In this case, the vCPU is always scheduled on one of the pCPUs to which
+it is pinned, without any specific preference among them.
+
+
+=item *
+
+a vCPU I<has> its own soft affinity and I<is not> pinned to any particular
+pCPU. In this case, the vCPU can run on every pCPU. Nevertheless, the
+scheduler will try to have it running on one of the pCPUs in its soft
+affinity;
+
+
+=item *
+
+a vCPU I<has> its own vCPU soft affinity and I<is also> pinned to some
+pCPUs. In this