Re: [Xen-devel] [PATCH v2] xen: xen-pciback: Remove create_workqueue

2016-06-28 Thread Bhaktipriya Shridhar
Ping!
Thanks,
Bhaktipriya


On Wed, Jun 1, 2016 at 9:15 PM, Tejun Heo <t...@kernel.org> wrote:
> On Wed, Jun 01, 2016 at 07:45:08PM +0530, Bhaktipriya Shridhar wrote:
>> System workqueues have been able to handle high levels of concurrency
>> for a long time now and there's no reason to use dedicated workqueues
>> just to gain concurrency.  Replace the dedicated xen_pcibk_wq with the
>> use of system_wq.
>>
>> Unlike a dedicated per-cpu workqueue created with create_workqueue(),
>> system_wq allows multiple work items to overlap executions even on
>> the same CPU; however, a per-cpu workqueue doesn't have any CPU
>> locality or global ordering guarantees unless the target CPU is
>> explicitly specified, and thus the increase in local concurrency
>> shouldn't make any difference.
>>
>> Since a work item could still be pending, flush_work() is used in
>> xen_pcibk_disconnect(). xen_pcibk_xenbus_remove() calls free_pdev(),
>> which in turn calls xen_pcibk_disconnect() for every pdev, ensuring
>> that no work item is pending while the driver is being disconnected.
>>
>> Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
>
> Acked-by: Tejun Heo <t...@kernel.org>
>
> Thanks.
>
> --
> tejun



Re: [Xen-devel] [PATCH v2] xen: xenbus: Remove create_workqueue

2016-06-28 Thread Bhaktipriya Shridhar
Ping!
Thanks,
Bhaktipriya


On Tue, May 31, 2016 at 10:59 PM, Tejun Heo <t...@kernel.org> wrote:
> On Tue, May 31, 2016 at 10:26:30PM +0530, Bhaktipriya Shridhar wrote:
>> System workqueues have been able to handle high levels of concurrency
>> for a long time now and there's no reason to use dedicated workqueues
>> just to gain concurrency.  Replace the dedicated xenbus_frontend_wq
>> with the use of system_wq.
>>
>> Unlike a dedicated per-cpu workqueue created with create_workqueue(),
>> system_wq allows multiple work items to overlap executions even on
>> the same CPU; however, a per-cpu workqueue doesn't have any CPU
>> locality or global ordering guarantees unless the target CPU is
>> explicitly specified, and thus the increase in local concurrency
>> shouldn't make any difference.
>>
>> In this case, there is only a single work item, so the increase in
>> concurrency level from switching to system_wq should not make any
>> difference.
>>
>> Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
>
> Acked-by: Tejun Heo <t...@kernel.org>
>
> Thanks.
>
> --
> tejun



[Xen-devel] [PATCH v2] xen: xen-pciback: Remove create_workqueue

2016-06-01 Thread Bhaktipriya Shridhar
System workqueues have been able to handle high levels of concurrency
for a long time now and there's no reason to use dedicated workqueues
just to gain concurrency.  Replace the dedicated xen_pcibk_wq with the
use of system_wq.

Unlike a dedicated per-cpu workqueue created with create_workqueue(),
system_wq allows multiple work items to overlap executions even on
the same CPU; however, a per-cpu workqueue doesn't have any CPU
locality or global ordering guarantees unless the target CPU is
explicitly specified, and thus the increase in local concurrency
shouldn't make any difference.

Since a work item could still be pending, flush_work() is used in
xen_pcibk_disconnect(). xen_pcibk_xenbus_remove() calls free_pdev(), which
in turn calls xen_pcibk_disconnect() for every pdev, ensuring that no work
item is pending while the driver is being disconnected.

Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
---
 Changes in v2:
 - Changed cancel_work_sync to flush_work
 - Changed the commit description

 Feedback to update a comment was received in v1 from David Vrabel. It has
 not been included in v2 since some clarification was still required; it
 will be included in v3 once the details about the content and placement of
 the comment are settled.

 drivers/xen/xen-pciback/pciback.h     |  1 -
 drivers/xen/xen-pciback/pciback_ops.c |  2 +-
 drivers/xen/xen-pciback/xenbus.c      | 10 +---------
 3 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index 4d529f3..7af369b6 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -55,7 +55,6 @@ struct xen_pcibk_dev_data {

 /* Used by XenBus and xen_pcibk_ops.c */
 extern wait_queue_head_t xen_pcibk_aer_wait_queue;
-extern struct workqueue_struct *xen_pcibk_wq;
 /* Used by pcistub.c and conf_space_quirks.c */
 extern struct list_head xen_pcibk_quirks;

diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index 2f19dd7..f8c7775 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -310,7 +310,7 @@ void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev)
 	 * already processing a request */
 	if (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags)
 	    && !test_and_set_bit(_PDEVF_op_active, &pdev->flags)) {
-		queue_work(xen_pcibk_wq, &pdev->op_work);
+		schedule_work(&pdev->op_work);
 	}
 	/*_XEN_PCIB_active should have been cleared by pcifront. And also make
 	sure xen_pcibk is waiting for ack by checking _PCIB_op_pending*/
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index c252eb3..5ce878c 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -17,7 +17,6 @@
 #include "pciback.h"

 #define INVALID_EVTCHN_IRQ  (-1)
-struct workqueue_struct *xen_pcibk_wq;

 static bool __read_mostly passthrough;
 module_param(passthrough, bool, S_IRUGO);
@@ -76,8 +75,7 @@ static void xen_pcibk_disconnect(struct xen_pcibk_device *pdev)
 	/* If the driver domain started an op, make sure we complete it
	 * before releasing the shared memory */

-	/* Note, the workqueue does not use spinlocks at all.*/
-	flush_workqueue(xen_pcibk_wq);
+	flush_work(&pdev->op_work);

	if (pdev->sh_info != NULL) {
		xenbus_unmap_ring_vfree(pdev->xdev, pdev->sh_info);
@@ -733,11 +731,6 @@ const struct xen_pcibk_backend *__read_mostly xen_pcibk_backend;

 int __init xen_pcibk_xenbus_register(void)
 {
-	xen_pcibk_wq = create_workqueue("xen_pciback_workqueue");
-	if (!xen_pcibk_wq) {
-		pr_err("%s: create xen_pciback_workqueue failed\n", __func__);
-		return -EFAULT;
-	}
	xen_pcibk_backend = &xen_pcibk_vpci_backend;
	if (passthrough)
		xen_pcibk_backend = &xen_pcibk_passthrough_backend;
@@ -747,6 +740,5 @@ int __init xen_pcibk_xenbus_register(void)

 void __exit xen_pcibk_xenbus_unregister(void)
 {
-	destroy_workqueue(xen_pcibk_wq);
	xenbus_unregister_driver(&xen_pcibk_driver);
 }
--
2.1.4
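
The conversion pattern this patch applies, reduced to a minimal standalone
module sketch for reference. All names below (demo_device, demo_op,
demo_init) are illustrative stand-ins, not the driver's own code; only the
workqueue calls are the real API:

#include <linux/module.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct demo_device {
	struct work_struct op_work;	/* one work item embedded per device */
	int pending_op;
};

static void demo_op(struct work_struct *work)
{
	struct demo_device *dev = container_of(work, struct demo_device,
					       op_work);

	pr_info("processing op %d\n", dev->pending_op);
}

static struct demo_device *demo_dev;

static int __init demo_init(void)
{
	demo_dev = kzalloc(sizeof(*demo_dev), GFP_KERNEL);
	if (!demo_dev)
		return -ENOMEM;
	INIT_WORK(&demo_dev->op_work, demo_op);

	/* Before: queue_work(demo_wq, &demo_dev->op_work) on a dedicated
	 * queue made with create_workqueue(); system_wq serves equally. */
	schedule_work(&demo_dev->op_work);
	return 0;
}

static void __exit demo_exit(void)
{
	/* Wait for the one work item instead of flushing and destroying a
	 * whole private workqueue. */
	flush_work(&demo_dev->op_work);
	kfree(demo_dev);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

The point of flush_work() here is that it waits on one specific work item
rather than draining a whole queue, which is all the teardown path needs.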




[Xen-devel] [PATCH v2] xen: xenbus: Remove create_workqueue

2016-05-31 Thread Bhaktipriya Shridhar
System workqueues have been able to handle high levels of concurrency
for a long time now and there's no reason to use dedicated workqueues
just to gain concurrency.  Replace the dedicated xenbus_frontend_wq with
the use of system_wq.

Unlike a dedicated per-cpu workqueue created with create_workqueue(),
system_wq allows multiple work items to overlap executions even on
the same CPU; however, a per-cpu workqueue doesn't have any CPU
locality or global ordering guarantees unless the target CPU is
explicitly specified, and thus the increase in local concurrency
shouldn't make any difference.

In this case, there is only a single work item, so the increase in
concurrency level from switching to system_wq should not make any
difference.

Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
---
 drivers/xen/xenbus/xenbus_probe_frontend.c | 15 +--------------
 1 file changed, 1 insertion(+), 14 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index bcb53bd..611a231 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -31,7 +31,6 @@
 #include "xenbus_probe.h"


-static struct workqueue_struct *xenbus_frontend_wq;

 /* device/<type>/<id> => <type>-<id> */
 static int frontend_bus_id(char bus_id[XEN_BUS_ID_SIZE], const char *nodename)
@@ -109,13 +108,7 @@ static int xenbus_frontend_dev_resume(struct device *dev)
	if (xen_store_domain_type == XS_LOCAL) {
		struct xenbus_device *xdev = to_xenbus_device(dev);

-		if (!xenbus_frontend_wq) {
-			pr_err("%s: no workqueue to process delayed resume\n",
-			       xdev->nodename);
-			return -EFAULT;
-		}
-
-		queue_work(xenbus_frontend_wq, &xdev->work);
+		schedule_work(&xdev->work);

		return 0;
	}
@@ -485,12 +478,6 @@ static int __init xenbus_probe_frontend_init(void)

	register_xenstore_notifier(&xenstore_notifier);

-	if (xen_store_domain_type == XS_LOCAL) {
-		xenbus_frontend_wq = create_workqueue("xenbus_frontend");
-		if (!xenbus_frontend_wq)
-			pr_warn("create xenbus frontend workqueue failed, S3 resume is likely to fail\n");
-	}
-
	return 0;
 }
 subsys_initcall(xenbus_probe_frontend_init);
--
2.1.4
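
The single-work-item argument above, in sketch form. The names
(demo_device, demo_resume) are hypothetical; only the workqueue calls are
the real API:

#include <linux/workqueue.h>

struct demo_device {
	struct work_struct work;	/* the only work item involved */
};

static void demo_resume_fn(struct work_struct *work)
{
	/* the deferred part of resume would run here, now on system_wq */
}

static void demo_setup(struct demo_device *dev)
{
	INIT_WORK(&dev->work, demo_resume_fn);
}

static int demo_resume(struct demo_device *dev)
{
	/* Before: queue_work(demo_wq, &dev->work) on a queue made with
	 * create_workqueue().  With a single, self-contained work item
	 * there is nothing to serialize against, so system_wq behaves
	 * identically here. */
	schedule_work(&dev->work);
	return 0;
}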




Re: [Xen-devel] [PATCH] xen: xenbus: Remove create_workqueue

2016-05-31 Thread Bhaktipriya Shridhar
Sorry about that. Will make the corrections in v2.

Thanks,
Bhaktipriya


On Tue, May 31, 2016 at 9:48 PM, David Vrabel <david.vra...@citrix.com> wrote:
> On 27/05/16 19:50, Bhaktipriya Shridhar wrote:
>> With concurrency-managed workqueues, use of dedicated workqueues can be
>> replaced by using system_wq. Drop xenbus_frontend_wq by using system_wq.
>>
>> Since there is only a single work item, the increase in concurrency
>> level from switching to system_wq should not break anything.
>>
>> Since the work item could be pending and the code expects it to run
>> once scheduled, flush_work() has been used in xenbus_dev_suspend().
>
> This says flush_work() but...
>>
>> Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
>> ---
>>  drivers/xen/xenbus/xenbus_probe.c          |  2 ++
>>  drivers/xen/xenbus/xenbus_probe_frontend.c | 15 +--------------
>>  2 files changed, 3 insertions(+), 14 deletions(-)
>>
>> diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
>> index 33a31cf..bc97019 100644
>> --- a/drivers/xen/xenbus/xenbus_probe.c
>> +++ b/drivers/xen/xenbus/xenbus_probe.c
>> @@ -592,6 +592,8 @@ int xenbus_dev_suspend(struct device *dev)
>>
>>   DPRINTK("%s", xdev->nodename);
>>
>> +	cancel_work_sync(&xdev->work);
>
> ...cancel_work_sync() is called here.
>
> David
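
The distinction being drawn here, as a sketch. The helper names are
illustrative; the semantics described are those of the standard workqueue
API:

#include <linux/workqueue.h>

/* flush_work(): wait for the work item to finish executing.  A still-
 * pending item runs to completion first, so on return its side effects
 * are guaranteed to have happened. */
static void demo_quiesce_by_flushing(struct work_struct *w)
{
	flush_work(w);
}

/* cancel_work_sync(): make the work item go away.  A still-pending item
 * is dequeued and never runs; only an already-running one is waited for.
 * This is the wrong call if, as the commit message says, the code
 * expects the item to run once scheduled. */
static void demo_quiesce_by_cancelling(struct work_struct *w)
{
	cancel_work_sync(w);
}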



[Xen-devel] [PATCH] xen: xenbus: Remove create_workqueue

2016-05-27 Thread Bhaktipriya Shridhar
With concurrency-managed workqueues, use of dedicated workqueues can be
replaced by using system_wq. Drop xenbus_frontend_wq by using system_wq.

Since there is only a single work item, the increase in concurrency
level from switching to system_wq should not break anything.

Since the work item could be pending and the code expects it to run
once scheduled, flush_work() has been used in xenbus_dev_suspend().

Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
---
 drivers/xen/xenbus/xenbus_probe.c          |  2 ++
 drivers/xen/xenbus/xenbus_probe_frontend.c | 15 +--------------
 2 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 33a31cf..bc97019 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -592,6 +592,8 @@ int xenbus_dev_suspend(struct device *dev)

	DPRINTK("%s", xdev->nodename);

+	cancel_work_sync(&xdev->work);
+
	if (dev->driver == NULL)
		return 0;
	drv = to_xenbus_driver(dev->driver);
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index bcb53bd..611a231 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -31,7 +31,6 @@
 #include "xenbus_probe.h"


-static struct workqueue_struct *xenbus_frontend_wq;

 /* device/<type>/<id> => <type>-<id> */
 static int frontend_bus_id(char bus_id[XEN_BUS_ID_SIZE], const char *nodename)
@@ -109,13 +108,7 @@ static int xenbus_frontend_dev_resume(struct device *dev)
	if (xen_store_domain_type == XS_LOCAL) {
		struct xenbus_device *xdev = to_xenbus_device(dev);

-		if (!xenbus_frontend_wq) {
-			pr_err("%s: no workqueue to process delayed resume\n",
-			       xdev->nodename);
-			return -EFAULT;
-		}
-
-		queue_work(xenbus_frontend_wq, &xdev->work);
+		schedule_work(&xdev->work);

		return 0;
	}
@@ -485,12 +478,6 @@ static int __init xenbus_probe_frontend_init(void)

	register_xenstore_notifier(&xenstore_notifier);

-	if (xen_store_domain_type == XS_LOCAL) {
-		xenbus_frontend_wq = create_workqueue("xenbus_frontend");
-		if (!xenbus_frontend_wq)
-			pr_warn("create xenbus frontend workqueue failed, S3 resume is likely to fail\n");
-	}
-
	return 0;
 }
 subsys_initcall(xenbus_probe_frontend_init);
--
2.1.4




[Xen-devel] [PATCH] xen: xen-pciback: Remove create_workqueue

2016-05-27 Thread Bhaktipriya Shridhar
With concurrency-managed workqueues, use of dedicated workqueues can be
replaced by using system_wq. Drop xen_pcibk_wq by using system_wq.

Since there is only a single work item, the increase in concurrency
level from switching to system_wq should not break anything.

cancel_work_sync() has been used in xen_pcibk_disconnect() to ensure that
the work item is neither pending nor executing by the time the exit path
runs.

Signed-off-by: Bhaktipriya Shridhar <bhaktipriy...@gmail.com>
---
 drivers/xen/xen-pciback/pciback.h     |  1 -
 drivers/xen/xen-pciback/pciback_ops.c |  2 +-
 drivers/xen/xen-pciback/xenbus.c      | 10 +---------
 3 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index 4d529f3..7af369b6 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -55,7 +55,6 @@ struct xen_pcibk_dev_data {

 /* Used by XenBus and xen_pcibk_ops.c */
 extern wait_queue_head_t xen_pcibk_aer_wait_queue;
-extern struct workqueue_struct *xen_pcibk_wq;
 /* Used by pcistub.c and conf_space_quirks.c */
 extern struct list_head xen_pcibk_quirks;

diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index 2f19dd7..f8c7775 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -310,7 +310,7 @@ void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev)
 	 * already processing a request */
 	if (test_bit(_XEN_PCIF_active, (unsigned long *)&pdev->sh_info->flags)
 	    && !test_and_set_bit(_PDEVF_op_active, &pdev->flags)) {
-		queue_work(xen_pcibk_wq, &pdev->op_work);
+		schedule_work(&pdev->op_work);
 	}
 	/*_XEN_PCIB_active should have been cleared by pcifront. And also make
 	sure xen_pcibk is waiting for ack by checking _PCIB_op_pending*/
diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index c252eb3..f70a8e1 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -17,7 +17,6 @@
 #include "pciback.h"

 #define INVALID_EVTCHN_IRQ  (-1)
-struct workqueue_struct *xen_pcibk_wq;

 static bool __read_mostly passthrough;
 module_param(passthrough, bool, S_IRUGO);
@@ -76,8 +75,7 @@ static void xen_pcibk_disconnect(struct xen_pcibk_device *pdev)
 	/* If the driver domain started an op, make sure we complete it
	 * before releasing the shared memory */

-	/* Note, the workqueue does not use spinlocks at all.*/
-	flush_workqueue(xen_pcibk_wq);
+	cancel_work_sync(&pdev->op_work);

	if (pdev->sh_info != NULL) {
		xenbus_unmap_ring_vfree(pdev->xdev, pdev->sh_info);
@@ -733,11 +731,6 @@ const struct xen_pcibk_backend *__read_mostly xen_pcibk_backend;

 int __init xen_pcibk_xenbus_register(void)
 {
-	xen_pcibk_wq = create_workqueue("xen_pciback_workqueue");
-	if (!xen_pcibk_wq) {
-		pr_err("%s: create xen_pciback_workqueue failed\n", __func__);
-		return -EFAULT;
-	}
	xen_pcibk_backend = &xen_pcibk_vpci_backend;
	if (passthrough)
		xen_pcibk_backend = &xen_pcibk_passthrough_backend;
@@ -747,6 +740,5 @@ int __init xen_pcibk_xenbus_register(void)

 void __exit xen_pcibk_xenbus_unregister(void)
 {
-	destroy_workqueue(xen_pcibk_wq);
	xenbus_unregister_driver(&xen_pcibk_driver);
 }
--
2.1.4

