Re: [RFC PATCH 3/3] kernel-doc: kerneldocify workqueue documentation

2016-10-17 Thread Jani Nikula
On Sun, 16 Oct 2016, Silvio Fricke  wrote:
> Only formatting changes.
>
> Signed-off-by: Silvio Fricke 
> ---
>  Documentation/index.rst |   1 +-
>  Documentation/workqueue.rst | 394 +-
>  Documentation/workqueue.txt | 388 +
>  MAINTAINERS |   2 +-
>  include/linux/workqueue.h   |   4 +-
>  5 files changed, 398 insertions(+), 391 deletions(-)
>  create mode 100644 Documentation/workqueue.rst
>  delete mode 100644 Documentation/workqueue.txt
>
> diff --git a/Documentation/index.rst b/Documentation/index.rst
> index c53d089..f631655 100644
> --- a/Documentation/index.rst
> +++ b/Documentation/index.rst
> @@ -18,6 +18,7 @@ Contents:
> media/index
> gpu/index
> 80211/index
> +   workqueue
>  
>  Indices and tables
>  ==================
> diff --git a/Documentation/workqueue.rst b/Documentation/workqueue.rst
> new file mode 100644
> index 0000000..03ac70f
> --- /dev/null
> +++ b/Documentation/workqueue.rst
> @@ -0,0 +1,394 @@
> +====================================
> +Concurrency Managed Workqueue (cmwq)
> +====================================
> +
> +:Date: September, 2010
> +:Authors: * Tejun Heo ;
> +  * Florian Mickler ;

I'd imagine

:Author: Tejun Heo 
:Author: Florian Mickler 

will work better.

BR,
Jani.



[RFC PATCH 3/3] kernel-doc: kerneldocify workqueue documentation

2016-10-16 Thread Silvio Fricke
Only formatting changes.

Signed-off-by: Silvio Fricke 
---
 Documentation/index.rst |   1 +-
 Documentation/workqueue.rst | 394 +-
 Documentation/workqueue.txt | 388 +
 MAINTAINERS |   2 +-
 include/linux/workqueue.h   |   4 +-
 5 files changed, 398 insertions(+), 391 deletions(-)
 create mode 100644 Documentation/workqueue.rst
 delete mode 100644 Documentation/workqueue.txt

diff --git a/Documentation/index.rst b/Documentation/index.rst
index c53d089..f631655 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -18,6 +18,7 @@ Contents:
media/index
gpu/index
80211/index
+   workqueue
 
 Indices and tables
 ==================
diff --git a/Documentation/workqueue.rst b/Documentation/workqueue.rst
new file mode 100644
index 0000000..03ac70f
--- /dev/null
+++ b/Documentation/workqueue.rst
@@ -0,0 +1,394 @@
+====================================
+Concurrency Managed Workqueue (cmwq)
+====================================
+
+:Date: September, 2010
+:Authors: * Tejun Heo ;
+  * Florian Mickler ;
+
+
+Introduction
+============
+
+There are many cases where an asynchronous process execution context
+is needed and the workqueue (wq) API is the most commonly used
+mechanism for such cases.
+
+When such an asynchronous execution context is needed, a work item
+describing which function to execute is put on a queue.  An
+independent thread serves as the asynchronous execution context.  The
+queue is called a workqueue and the thread is called a worker.
+
+While there are work items on the workqueue the worker executes the
+functions associated with the work items one after the other.  When
+there is no work item left on the workqueue the worker becomes idle.
+When a new work item gets queued, the worker begins executing again.
+
+
+Why cmwq?
+=========
+
+In the original wq implementation, a multi-threaded (MT) wq had one
+worker thread per CPU and a single-threaded (ST) wq had one worker
+thread system-wide.  A single MT wq needed to keep around the same
+number of workers as the number of CPUs.  The kernel grew a lot of MT
+wq users over the years and with the number of CPU cores continuously
+rising, some systems saturated the default 32k PID space just booting
+up.
+
+Although MT wq wasted a lot of resources, the level of concurrency
+provided was unsatisfactory.  The limitation was common to both ST and
+MT wq, albeit less severe on MT.  Each wq maintained its own separate
+worker pool.  An MT wq could provide only one execution context per
+CPU while an ST wq provided one for the whole system.  Work items had
+to compete for those very limited execution contexts, leading to
+various problems including proneness to deadlocks around the single
+execution context.
+
+The tension between the provided level of concurrency and resource
+usage also forced wq users to make unnecessary tradeoffs, like libata
+choosing to use ST wq for polling PIOs and accepting the unnecessary
+limitation that no two polling PIOs can progress at the same time.  As
+MT wq didn't provide much better concurrency, users that required a
+higher level of concurrency, like async or fscache, had to implement
+their own thread pools.
+
+Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
+focus on the following goals.
+
+* Maintain compatibility with the original workqueue API.
+
+* Use per-CPU unified worker pools shared by all wq to provide
+  flexible level of concurrency on demand without wasting a lot of
+  resource.
+
+* Automatically regulate worker pool and level of concurrency so that
+  the API users don't need to worry about such details.
+
+
+The Design
+==========
+
+In order to ease the asynchronous execution of functions a new
+abstraction, the work item, is introduced.
+
+A work item is a simple struct that holds a pointer to the function
+that is to be executed asynchronously.  Whenever a driver or subsystem
+wants a function to be executed asynchronously it has to set up a work
+item pointing to that function and queue that work item on a
+workqueue.
+
+Special purpose threads, called worker threads, execute the functions
+off of the queue, one after the other.  If no work is queued, the
+worker threads become idle.  These worker threads are managed in
+so-called worker-pools.
+
+The cmwq design differentiates between the user-facing workqueues that
+subsystems and drivers queue work items on and the backend mechanism
+which manages worker-pools and processes the queued work items.
+
+There are two worker-pools, one for normal work items and the other
+for high priority ones, for each possible CPU and some extra
+worker-pools to serve work items queued on unbound workqueues - the
+number of these backing pools is dynamic.
+
+Subsystems and drivers can create and queue work items through special
+workqueue API functions as they see fit. They can influence some
+aspects of the way the work items are executed by
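As an aside on the execution model the Introduction of the patched document describes (queue a work item, a worker drains the queue, the worker goes idle when the queue is empty), it can be sketched in userspace C with POSIX threads. This is only an illustrative analogy, not the kernel code: every name in it (work_item, workqueue, queue_work, worker) is invented for the sketch. In the kernel itself, drivers use struct work_struct with INIT_WORK() and queue_work()/schedule_work() instead.

```c
#include <pthread.h>
#include <stddef.h>

struct work_item {
	void (*func)(void *arg);	/* function to execute asynchronously */
	void *arg;
	struct work_item *next;
};

struct workqueue {
	struct work_item *head, *tail;
	pthread_mutex_t lock;
	pthread_cond_t more;		/* signalled when a new item is queued */
	int shutdown;
};

/* Put a work item on the queue and wake the worker if it is idle. */
static void queue_work(struct workqueue *wq, struct work_item *w)
{
	w->next = NULL;
	pthread_mutex_lock(&wq->lock);
	if (wq->tail)
		wq->tail->next = w;
	else
		wq->head = w;
	wq->tail = w;
	pthread_cond_signal(&wq->more);
	pthread_mutex_unlock(&wq->lock);
}

/* The worker: execute the queued functions one after the other, and
 * become idle (block on the condition variable) when none remain. */
static void *worker(void *data)
{
	struct workqueue *wq = data;

	pthread_mutex_lock(&wq->lock);
	for (;;) {
		while (wq->head) {
			struct work_item *w = wq->head;

			wq->head = w->next;
			if (!wq->head)
				wq->tail = NULL;
			pthread_mutex_unlock(&wq->lock);
			w->func(w->arg);	/* run outside the lock */
			pthread_mutex_lock(&wq->lock);
		}
		if (wq->shutdown)
			break;
		pthread_cond_wait(&wq->more, &wq->lock);	/* idle */
	}
	pthread_mutex_unlock(&wq->lock);
	return NULL;
}
```

The sketch deliberately drops the lock while running a work function, mirroring the real design: a slow work item must not block further queueing.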
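The pool layout described in The Design section (a normal and a high-priority worker-pool for each possible CPU, plus dynamically created pools backing unbound workqueues) can likewise be sketched. Again, the names here (worker_pool, select_pool, NR_CPUS) are invented for illustration and only the static per-CPU bound pools are modelled; the dynamic unbound pools are left out.

```c
#include <stdbool.h>

#define NR_CPUS 4	/* stands in for "possible CPUs" in this sketch */

enum pool_prio { POOL_NORMAL, POOL_HIGHPRI, NR_PRIO };

struct worker_pool {
	int cpu;		/* -1 would mark an unbound pool */
	enum pool_prio prio;
};

/* Two bound pools per possible CPU: normal and high priority. */
static struct worker_pool cpu_pools[NR_CPUS][NR_PRIO];

static void init_pools(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		for (int p = POOL_NORMAL; p < NR_PRIO; p++) {
			cpu_pools[cpu][p].cpu = cpu;
			cpu_pools[cpu][p].prio = (enum pool_prio)p;
		}
}

/* Route a work item queued on a per-CPU workqueue to its backing pool,
 * based on the CPU it was queued on and the workqueue's priority. */
static struct worker_pool *select_pool(int cpu, bool highpri)
{
	return &cpu_pools[cpu][highpri ? POOL_HIGHPRI : POOL_NORMAL];
}
```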