Re: [PATCH 2/3] workqueue: use dedicated creater kthread for all pools

2014-07-28 Thread Tejun Heo
Hello,

On Tue, Jul 29, 2014 at 09:26:35AM +0800, Lai Jiangshan wrote:
> > It's a bit difficult to get excited about this patchset given that
> > this is mostly churn without many actual benefits.  Sure, it
> > consolidates one-per-pool managers into one kthread_worker but it was
> > one-per-pool already.  That said, I don't hate it and it may be
> > considered an improvement.  I don't know.
> 
> It prefers processing work items over creating workers, without any
> loss of the guarantees.
> 
> Processing work items makes direct progress for the system;
> creating a worker only makes indirect, delayed progress.

That's misleading, isn't it?  Both process work items the same.  The
only difference is per-pool manager ends up using more tasks, thus a
bit more memory, doing it.  There really is no significant behavior
difference between the two schemes except for how many tasks end up
serving as the manager.

> > This is kinda silly when the duty of worker creation is served by an
> > external entity.  Why would a pool need any idle worker?
> 
> The idle worker must be ready, or being prepared, for wq_worker_sleeping()
> or a chained wake-up.
> 
> percpu-kthreadd could serve wq_worker_sleeping() in this case, but it is
> not a good idea to introduce percpu-kthreadd now since it has no other user.

Hmmm... I'm not really sure what we're getting with this.  It doesn't
look much simpler to me.  :(

Lai, I don't know.  If this ends up simplifying things significantly,
sure, but as it currently stands, I can't see why we'd need to do
this.  If you wanna pursue this, please try to make it more
beneficial.

Thanks.

-- 
tejun
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH 2/3] workqueue: use dedicated creater kthread for all pools

2014-07-28 Thread Lai Jiangshan
On 07/29/2014 02:55 AM, Tejun Heo wrote:
> Hello, Lai.
> 
> On Sat, Jul 26, 2014 at 11:04:50AM +0800, Lai Jiangshan wrote:
>> There are some problems with the managers:
>>   1) The last idle worker prefers managing to processing.
>>  It is better that the processing of work items be the first
>>  priority so that the whole system makes progress earlier.
>>   2) Managers of different pools can run in parallel, but in fact
>>  their major work is serialized on the kernel thread "kthreadd".
>>  These managers sleep and are wasted when the system is short
>>  of memory.
> 
> It's a bit difficult to get excited about this patchset given that
> this is mostly churn without many actual benefits.  Sure, it
> consolidates one-per-pool managers into one kthread_worker but it was
> one-per-pool already.  That said, I don't hate it and it may be
> considered an improvement.  I don't know.

It prefers processing work items over creating workers, without any
loss of the guarantees.

Processing work items makes direct progress for the system;
creating a worker only makes indirect, delayed progress.

> 
>> This patch introduces a dedicated creater kthread which offloads the
>> managing from the workers; thus every worker makes an effort to process
>> work rather than create workers, and no manager is wasted sleeping
>> when the system is short of memory.  This dedicated creater kthread
>> causes a little more work to be serialized than before, but that is
>> acceptable.
> 
> I do dislike the idea of creating this sort of hard serialization
> among separate worker pools.  It's probably acceptable but why are we
> doing this?  


I was about to use a per-cpu kthread_worker, but I changed to a single
global kthread_worker after I noticed that kthread_create() is already
serialized in kthreadd.

> To save one thread per pool and shed 30 lines of code
> while adding 40 lines to kthread_worker?

Our counts differ!
Simplicity was also my aim in this patchset, and the total LOC was reduced.

> 
>>   1) the creater_work
>> Every pool has a struct kthread_work creater_work to create workers, and
>> the dedicated creater kthread processes all these creater_works of
>> all pools.  struct kthread_work has its own execution exclusion, so we
>> don't need the manager_arb to handle the creation exclusion any more.
> 
> This explanation can be a bit misleading, I think.  We just don't have
> multiple workers trying to become managers anymore.
> 
>> put_unbound_pool() uses flush_kthread_work() to synchronize with
>> the creation rather than the manager_arb.
>>
>>   2) the cooldown timer
>> The cooldown timer is introduced to implement the cool-down mechanism
>> rather than to make the creater sleep.  When create_worker()
>> fails, the cooldown timer is armed and will restart the creater_work.
> 
> Why?  Why doesn't the creator sleep?
> 
> ...
>>   5) the routine of the creater_work
>> The creater_work creates a worker under these two conditions:
>>   A) pool->nr_idle == 0
>>  A new worker obviously needs to be created even when there is no
>>  work item pending.  The busy workers may sleep, and the pool can't
>>  serve future work items if no new worker is on standby or
>>  being created.
> 
> This is kinda silly when the duty of worker creation is served by an
> external entity.  Why would a pool need any idle worker?

The idle worker must be ready, or being prepared, for wq_worker_sleeping()
or a chained wake-up.

percpu-kthreadd could serve wq_worker_sleeping() in this case, but it is
not a good idea to introduce percpu-kthreadd now since it has no other user.

> insert_work() may as well just queue worker creation on demand.

Yes, it can save some workers for an idle unbound pool.

> 
>>   8) init_workqueues()
>> The struct kthread_worker kworker_creater is initialized earlier than
>> the worker_pools in init_workqueues() so that kworker_creater_thread is
>> created before all early kworkers.  Although the early kworkers do not
>> depend on kworker_creater_thread, this initialization order makes
>> the pid of kworker_creater_thread smaller than the kworkers', which
>> seems smoother.
> 
> Just don't create idle workers?

It is a good idea.

Thanks for review and comments.
Lai


Re: [PATCH 2/3] workqueue: use dedicated creater kthread for all pools

2014-07-28 Thread Tejun Heo
Hello, Lai.

On Sat, Jul 26, 2014 at 11:04:50AM +0800, Lai Jiangshan wrote:
> There are some problems with the managers:
>   1) The last idle worker prefers managing to processing.
>  It is better that the processing of work items be the first
>  priority so that the whole system makes progress earlier.
>   2) Managers of different pools can run in parallel, but in fact
>  their major work is serialized on the kernel thread "kthreadd".
>  These managers sleep and are wasted when the system is short
>  of memory.

It's a bit difficult to get excited about this patchset given that
this is mostly churn without many actual benefits.  Sure, it
consolidates one-per-pool managers into one kthread_worker but it was
one-per-pool already.  That said, I don't hate it and it may be
considered an improvement.  I don't know.

> This patch introduces a dedicated creater kthread which offloads the
> managing from the workers; thus every worker makes an effort to process
> work rather than create workers, and no manager is wasted sleeping
> when the system is short of memory.  This dedicated creater kthread
> causes a little more work to be serialized than before, but that is
> acceptable.

I do dislike the idea of creating this sort of hard serialization
among separate worker pools.  It's probably acceptable but why are we
doing this?  To save one thread per pool and shed 30 lines of code
while adding 40 lines to kthread_worker?

>   1) the creater_work
> Every pool has a struct kthread_work creater_work to create workers, and
> the dedicated creater kthread processes all these creater_works of
> all pools.  struct kthread_work has its own execution exclusion, so we
> don't need the manager_arb to handle the creation exclusion any more.

This explanation can be a bit misleading, I think.  We just don't have
multiple workers trying to become managers anymore.

> put_unbound_pool() uses flush_kthread_work() to synchronize with
> the creation rather than the manager_arb.
> 
>   2) the cooldown timer
> The cooldown timer is introduced to implement the cool-down mechanism
> rather than to make the creater sleep.  When create_worker()
> fails, the cooldown timer is armed and will restart the creater_work.

Why?  Why doesn't the creator sleep?

...
>   5) the routine of the creater_work
> The creater_work creates a worker under these two conditions:
>   A) pool->nr_idle == 0
>  A new worker obviously needs to be created even when there is no
>  work item pending.  The busy workers may sleep, and the pool can't
>  serve future work items if no new worker is on standby or
>  being created.

This is kinda silly when the duty of worker creation is served by an
external entity.  Why would a pool need any idle worker?
insert_work() may as well just queue worker creation on demand.

>   8) init_workqueues()
> The struct kthread_worker kworker_creater is initialized earlier than
> the worker_pools in init_workqueues() so that kworker_creater_thread is
> created before all early kworkers.  Although the early kworkers do not
> depend on kworker_creater_thread, this initialization order makes
> the pid of kworker_creater_thread smaller than the kworkers', which
> seems smoother.

Just don't create idle workers?

Overall, I don't know.  I like some aspects of it but there's nothing
really convincing about this change.  It sure is different.  Maybe
this is more straight-forward but I'm not sure this is worth the
changes.

Thanks.

-- 
tejun


[PATCH 2/3] workqueue: use dedicated creater kthread for all pools

2014-07-25 Thread Lai Jiangshan
There are some problems with the managers:
  1) The last idle worker prefers managing to processing.
 It is better that the processing of work items be the first
 priority so that the whole system makes progress earlier.
  2) Managers of different pools can run in parallel, but in fact
 their major work is serialized on the kernel thread "kthreadd".
 These managers sleep and are wasted when the system is short
 of memory.

This patch introduces a dedicated creater kthread which offloads the
managing from the workers; thus every worker makes an effort to process
work rather than create workers, and no manager is wasted sleeping
when the system is short of memory.  This dedicated creater kthread causes
a little more work to be serialized than before, but that is acceptable.

Implement Detail:
  1) the creater_work:  do the creation, with creation exclusion
  2) the cooldown timer:    cool down when creation fails
  3) the mayday timer:  dismiss itself when pool->nr_idle > 0
  4) the semantic for "pool->nr_idle == 0":  activate the creater_work,
 cooldown timer and mayday timer
  5) the routine of the creater_work:   create a worker on two conditions
  6) worker_thread():   start management without
 unlock/wait/recheck/retry
  7) put_unbound_pool():    group destruction code by functionality
  8) init_workqueues(): init creater kthread earlier than pools/workers

  1) the creater_work
Every pool has a struct kthread_work creater_work to create workers, and
the dedicated creater kthread processes all these creater_works of
all pools.  struct kthread_work has its own execution exclusion, so we
don't need the manager_arb to handle the creation exclusion any more.
put_unbound_pool() uses flush_kthread_work() to synchronize with
the creation rather than the manager_arb.
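
The execution exclusion above can be sketched with a toy userspace model.
All names here (toy_kwork, toy_queue, toy_run) are illustrative, not the
kernel's kthread_work API; the point is only that queueing an already-pending
work is absorbed, so at most one creation pass runs per pool:

```c
#include <stdbool.h>

/* Toy model of kthread_work's built-in execution exclusion. */
struct toy_kwork {
	bool pending;	/* already queued on the creater kthread? */
	int executed;	/* how many times the work actually ran */
};

/* Queueing an already-pending work is a no-op, so concurrent kicks
 * from many workers collapse into a single creation pass. */
static bool toy_queue(struct toy_kwork *w)
{
	if (w->pending)
		return false;
	w->pending = true;
	return true;
}

/* The single creater kthread draining the work. */
static void toy_run(struct toy_kwork *w)
{
	if (w->pending) {
		w->pending = false;
		w->executed++;
	}
}
```

This is also why flush-style synchronization suffices for teardown: once the
pending flag is clear and the run has finished, no creation is in flight.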

  2) the cooldown timer
The cooldown timer is introduced to implement the cool-down mechanism
rather than to make the creater sleep.  When create_worker()
fails, the cooldown timer is armed and will restart the creater_work.
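
As a rough sketch of that failure path (fake_pool, fail_budget and the helper
names are invented for this model, and the timer firing is collapsed into a
loop), each failed creation arms the cooldown timer, which later re-queues the
creater_work:

```c
#include <stdbool.h>

/* Hypothetical stand-in for create_worker(): fails fail_budget times
 * (e.g. allocation failure under memory pressure), then succeeds. */
struct fake_pool {
	int fail_budget;
	int cooldown_kicks;	/* how often the cooldown timer fired */
};

static bool fake_create_worker(struct fake_pool *p)
{
	if (p->fail_budget > 0) {
		p->fail_budget--;
		return false;
	}
	return true;
}

/* Every failure arms the cooldown timer; when it fires it restarts
 * the creater_work, modeled here as the next loop iteration. */
static int creater_until_success(struct fake_pool *p)
{
	while (!fake_create_worker(p))
		p->cooldown_kicks++;
	return p->cooldown_kicks;
}
```

The creater kthread itself never sleeps in a retry loop; between the timer
shots it is free to serve the creater_works of other pools.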

  3) the mayday timer
The mayday timer is changed so it doesn't restart itself when
pool->nr_idle > 0.  If it always restarted itself as before, we would add
a lot of complication to the creater_work.  The creater_work would need to
call del_timer_sync() after a successful creation and grab pool->lock
to check and restart the timer when pool->nr_idle becomes zero.
We don't want that complication, so let the timer dismiss itself.
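
The self-dismissing behavior can be condensed into the rearm decision of the
timer callback (a sketch with made-up names; maydays_sent is just an
observable counter for the model):

```c
#include <stdbool.h>

/* Returns whether the mayday timer rearms itself for another interval. */
static bool mayday_tick(int nr_idle, int *maydays_sent)
{
	if (nr_idle > 0)
		return false;	/* pool recovered: dismiss itself */
	(*maydays_sent)++;	/* still no idle worker: cry for help */
	return true;		/* and rearm for the next interval */
}
```

Since the timer stops itself as soon as nr_idle > 0, the creater_work never
needs del_timer_sync() after a successful creation.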

  4) the semantic for "pool->nr_idle == 0"
At any moment when pool->nr_idle == 0, the pool must be in the creating
state.  The mayday timer must be active (pending or running) to ensure
a mayday can be sent, and at least one of the creater_work or the cooldown
timer must be active to ensure the creation is in progress or standing by.
So the last worker that causes pool->nr_idle to drop to 0 has the
responsibility to kick the mayday timer and the creater_work.  And
may_start_working() becomes misleading because the meaning of
"!pool->nr_idle" has changed and any worker should start working
regardless of the value of pool->nr_idle.  may_start_working() will be
cleaned up in the next patch so that this patch stays shorter for review.
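
That kick responsibility can be modeled in a few lines (model_pool and its
fields mirror the description, not the actual struct worker_pool layout):

```c
#include <stdbool.h>

/* Model of the invariant: whoever drops nr_idle to zero must arm the
 * mayday timer and queue the creater_work before going to work. */
struct model_pool {
	int nr_idle;
	bool mayday_armed;
	bool creater_kicked;
};

static void worker_leave_idle(struct model_pool *p)
{
	if (--p->nr_idle == 0) {
		p->mayday_armed = true;		/* a mayday can be sent */
		p->creater_kicked = true;	/* creation is in progress */
	}
}
```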

  5) the routine of the creater_work
The creater_work creates a worker under these two conditions:
  A) pool->nr_idle == 0
 A new worker obviously needs to be created even when there is no
 work item pending.  The busy workers may sleep, and the pool can't
 serve future work items if no new worker is on standby or
 being created.
  B) pool->nr_idle == 1 && pool->nr_running == 0
 It should create a worker here too, although it is not strictly
 needed, since we still have one free idle worker which can restart
 the creation when it becomes busy.  But if we don't create a worker
 in this condition, the condition may recur frequently: a work item
 is queued and the last idle worker kicks the creater_work, but the
 work item finishes before the creater_work runs, so the condition
 occurs again, and it can repeat this way again and again.  So we
 had better create a worker in this condition.
The creater_work quits directly in all other conditions.
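
The two conditions reduce to a pure predicate (the function name is made up;
the real creater_work evaluates this under pool->lock):

```c
#include <stdbool.h>

/* The creater_work's creation decision, as described above. */
static bool creater_should_create(int nr_idle, int nr_running)
{
	if (nr_idle == 0)
		return true;	/* A: no standby worker at all */
	if (nr_idle == 1 && nr_running == 0)
		return true;	/* B: avoid the re-trigger ping-pong */
	return false;		/* otherwise quit directly */
}
```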

  6) worker_thread()
There is no recheck or retry when creation is required in worker_thread();
it just kicks the mayday timer and the creater_work and goes on to
process the work items directly.

  7) put_unbound_pool()
put_unbound_pool() groups code by functionality rather than by name: it
first stops the creation activity (creater_work, cooldown_timer,
mayday_timer) and then stops the idle workers and the idle_timer.
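
The ordering matters: everything that can add workers must be quiesced before
the workers themselves are destroyed.  A sketch with an invented step log
(the real function calls the actual teardown helpers in this order):

```c
#include <string.h>

#define MAX_STEPS 8

struct teardown_log {
	const char *steps[MAX_STEPS];
	int n;
};

static void step(struct teardown_log *log, const char *what)
{
	if (log->n < MAX_STEPS)
		log->steps[log->n++] = what;
}

/* Model of put_unbound_pool()'s grouping by functionality. */
static void model_put_unbound_pool(struct teardown_log *log)
{
	/* 1. stop all creation activity first */
	step(log, "flush creater_work");
	step(log, "del cooldown_timer");
	step(log, "del mayday_timer");
	/* 2. only then tear down the workers and the idle timer */
	step(log, "destroy idle workers");
	step(log, "del idle_timer");
}
```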

  8) init_workqueues()
The struct kthread_worker kworker_creater is initialized earlier than
the worker_pools in init_workqueues() so that kworker_creater_thread is
created before all early kworkers.  Although the early kworkers do not
depend on kworker_creater_thread, this initialization order makes
the pid of kworker_creater_thread smaller than the kworkers', which
seems smoother.
