[PATCH] PCI: Add disabling pm async quirk for JMicron chips

2014-12-04 Thread Chuansheng Liu
Some history from
commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361")
==
Since v3.15, async noirq PM has been available; it was introduced by
commit 76569faa62c4 ("PM / sleep: Asynchronous threads for resume_noirq").

Then Jay hit a system resume issue where one of the JMicron controllers
could not be powered up successfully.

His device topology is as below:
 +-1c.4-[02]--+-00.0  JMicron Technology Corp. JMB363 SATA/IDE Controller
 |            \-00.1  JMicron Technology Corp. JMB363 SATA/IDE Controller

After investigation, we found that the JMicron chip 363 includes
one SATA controller (0000:02:00.0) and one PATA controller (0000:02:00.1).
These two controllers have no parent-child relationship,
but the PATA controller can only be powered on after the SATA controller
has finished powering on.

If async noirq is enabled, the below error is hit during the noirq
phase:
pata_jmicron 0000:02:00.1: Refused to change power state, currently in D3
So for the JMicron chips 363/361, we need to forcibly disable the async method.

Bug detail: https://bugzilla.kernel.org/show_bug.cgi?id=81551
==
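
The ordering hazard described in the quoted history can be sketched as a toy model. The function and device names below are hypothetical simplifications for illustration, not kernel code:

```python
# Toy model of the resume-ordering hazard described above (hypothetical
# names; the real ordering logic lives in drivers/base/power/main.c).
def resume(deps, order):
    """Resume devices in 'order'; a device fails if its power
    dependency has not been powered on yet."""
    powered, results = set(), {}
    for name in order:
        ok = deps[name] is None or deps[name] in powered
        if ok:
            powered.add(name)
        results[name] = ok
    return results

# JMB363: the PATA function can only power up after the SATA function.
deps = {"sata": None, "pata": "sata"}

# Synchronous resume follows dpm_list order, so it always works.
assert resume(deps, ["sata", "pata"]) == {"sata": True, "pata": True}

# Async resume gives no ordering guarantee between devices that are not
# parent and child; a schedule that resumes PATA first reproduces the
# "Refused to change power state" failure.
assert resume(deps, ["pata", "sata"])["pata"] is False
```

Disabling async suspend for these functions removes the hazard by forcing them back onto the ordered, synchronous path.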

After that, Barto reported the same issue, but his JMicron chip is a JMB368,
so it is not covered by
commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361").

Bug link:
https://bugzilla.kernel.org/show_bug.cgi?id=84861

We think JMicron chips have the same issue as described in
commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361"),
so add one quirk to disable the PM async feature for JMicron chips.

Cc: sta...@vger.kernel.org # 3.15+
Signed-off-by: Chuansheng Liu 
---
 drivers/pci/quirks.c |   17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 90acb32..1963080 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1501,6 +1501,21 @@ static void quirk_jmicron_ata(struct pci_dev *pdev)
	pci_read_config_dword(pdev, PCI_CLASS_REVISION, &class);
pdev->class = class >> 8;
 }
+
+/*
+ * JMicron chips need the async_suspend method disabled; otherwise they
+ * hit a power-on failure during device resume, so add one quirk to
+ * disable async_suspend for them.
+ */
+static void pci_async_suspend_fixup(struct pci_dev *pdev)
+{
+	/*
+	 * Disable async_suspend for JMicron chips to avoid the
+	 * device resume issue.
+	 */
+	device_disable_async_suspend(&pdev->dev);
+}
+
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB360, quirk_jmicron_ata);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB361, quirk_jmicron_ata);
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB362, quirk_jmicron_ata);
@@ -1519,6 +1534,8 @@ DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB3
 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB366, quirk_jmicron_ata);
 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB368, quirk_jmicron_ata);
 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB369, quirk_jmicron_ata);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_JMICRON, PCI_ANY_ID,
+   pci_async_suspend_fixup);
 
 #endif
 
-- 
1.7.9.5

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH] PCI: Do not enable async suspend for JMicron chips

2014-11-04 Thread Chuansheng Liu
The JMicron chips 361/363/368 contain one SATA controller and
one PATA controller that are siblings in the PCI tree, but the
two controllers must be powered on in sequence; otherwise one
of them cannot be powered on successfully.

So disable the async suspend method for JMicron chips.

Bug link:
https://bugzilla.kernel.org/show_bug.cgi?id=81551
https://bugzilla.kernel.org/show_bug.cgi?id=84861

And we can revert the below commit after this patch is applied:
commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361")

Cc: sta...@vger.kernel.org # 3.15+
Acked-by: Aaron Lu 
Signed-off-by: Chuansheng Liu 
---
 drivers/pci/pci.c |   12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 625a4ac..53128f0 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -2046,7 +2046,17 @@ void pci_pm_init(struct pci_dev *dev)
 	pm_runtime_forbid(&dev->dev);
 	pm_runtime_set_active(&dev->dev);
 	pm_runtime_enable(&dev->dev);
-	device_enable_async_suspend(&dev->dev);
+
+	/*
+	 * The JMicron chips 361/363/368 contain one SATA controller and
+	 * one PATA controller that are siblings in the PCI tree, but the
+	 * two controllers must be powered on in sequence; otherwise one
+	 * of them cannot be powered on successfully.  So disable the
+	 * async suspend method for JMicron chips.
+	 */
+	if (dev->vendor != PCI_VENDOR_ID_JMICRON)
+		device_enable_async_suspend(&dev->dev);
dev->wakeup_prepared = false;
 
dev->pm_cap = 0;
-- 
1.7.9.5



[PATCH V2] ata: Disabling the async PM for JMicron chips

2014-09-24 Thread Chuansheng Liu
Like commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361"),
Barto found a similar issue for the JMicron chip 368: the 363/368 functions
have no parent-child relationship, but they do have a power dependency.

So exclude the JMicron chips from the pm_async method directly,
to avoid further similar issues.

Details in:
https://bugzilla.kernel.org/show_bug.cgi?id=84861

Reported-and-tested-by: Barto 
Signed-off-by: Chuansheng Liu 
---
 drivers/ata/ahci.c         |   11 +++++------
 drivers/ata/pata_jmicron.c |   11 +++++------
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index a0cc0ed..85aa6ec 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1340,15 +1340,14 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS;
 
/*
-* The JMicron chip 361/363 contains one SATA controller and one
+* The JMicron chip 361/363/368 contains one SATA controller and one
 * PATA controller,for powering on these both controllers, we must
 * follow the sequence one by one, otherwise one of them can not be
-* powered on successfully, so here we disable the async suspend
-* method for these chips.
+* powered on successfully.
+* Here we can exclude the Jmicron family directly out of pm_async
+* method to follow the power-on sequence.
 */
-   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
-   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
-   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON)
 		device_disable_async_suspend(&pdev->dev);
 
/* acquire resources */
diff --git a/drivers/ata/pata_jmicron.c b/drivers/ata/pata_jmicron.c
index 47e418b..1d685b6 100644
--- a/drivers/ata/pata_jmicron.c
+++ b/drivers/ata/pata_jmicron.c
@@ -144,15 +144,14 @@ static int jmicron_init_one (struct pci_dev *pdev, const struct pci_device_id *i
 	const struct ata_port_info *ppi[] = { &info, NULL };
 
/*
-* The JMicron chip 361/363 contains one SATA controller and one
+* The JMicron chip 361/363/368 contains one SATA controller and one
 * PATA controller,for powering on these both controllers, we must
 * follow the sequence one by one, otherwise one of them can not be
-* powered on successfully, so here we disable the async suspend
-* method for these chips.
+* powered on successfully.
+* Here we can exclude the Jmicron family directly out of pm_async
+* method to follow the power-on sequence.
 */
-   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
-   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
-   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON)
 		device_disable_async_suspend(&pdev->dev);
 
 	return ata_pci_bmdma_init_one(pdev, ppi, &jmicron_sht, NULL, 0);
-- 
1.7.9.5



[PATCH] ata: Disabling the async PM for JMicron chips

2014-09-22 Thread Chuansheng Liu
Similar to commit e6b7e41cdd8c ("ata: Disabling the async PM for JMicron chip 363/361"),
Barto found a similar issue for the JMicron chip 368: the 363/368 functions
have no parent-child relationship, but they do have a power dependency.

So exclude the JMicron chips from the pm_async method directly,
to avoid further similar issues.

Details in:
https://bugzilla.kernel.org/show_bug.cgi?id=84861

Reported-and-tested-by: Barto 
Signed-off-by: Chuansheng Liu 
---
 drivers/ata/ahci.c |6 +++---
 drivers/ata/pata_jmicron.c |6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index a0cc0ed..c096d49 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1345,10 +1345,10 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
 * follow the sequence one by one, otherwise one of them can not be
 * powered on successfully, so here we disable the async suspend
 * method for these chips.
+	 * The JMicron chip 368 has been found to have a similar issue; here
+	 * we can exclude the JMicron family directly to avoid other similar issues.
 */
-   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
-   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
-   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON)
 		device_disable_async_suspend(&pdev->dev);
 
/* acquire resources */
diff --git a/drivers/ata/pata_jmicron.c b/drivers/ata/pata_jmicron.c
index 47e418b..48c993b 100644
--- a/drivers/ata/pata_jmicron.c
+++ b/drivers/ata/pata_jmicron.c
@@ -149,10 +149,10 @@ static int jmicron_init_one (struct pci_dev *pdev, const struct pci_device_id *i
 * follow the sequence one by one, otherwise one of them can not be
 * powered on successfully, so here we disable the async suspend
 * method for these chips.
+	 * The JMicron chip 368 has been found to have a similar issue; here
+	 * we can exclude the JMicron family directly to avoid other similar issues.
 */
-   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
-   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
-   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON)
 		device_disable_async_suspend(&pdev->dev);
 
 	return ata_pci_bmdma_init_one(pdev, ppi, &jmicron_sht, NULL, 0);
-- 
1.7.9.5



[tip:sched/core] cpuidle: Use wake_up_all_idle_cpus() to wake up all idle cpus

2014-09-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  2ed903c5485bad0eafdd3d59ff993598736e4f31
Gitweb: http://git.kernel.org/tip/2ed903c5485bad0eafdd3d59ff993598736e4f31
Author: Chuansheng Liu 
AuthorDate: Thu, 4 Sep 2014 15:17:55 +0800
Committer:  Ingo Molnar 
CommitDate: Fri, 19 Sep 2014 12:35:16 +0200

cpuidle: Use wake_up_all_idle_cpus() to wake up all idle cpus

Currently kick_all_cpus_sync() and smp_call_function() cannot
break a polling idle cpu immediately.

Using wake_up_all_idle_cpus() instead, which can wake a polling idle
cpu quickly, is much better for power.

Signed-off-by: Chuansheng Liu 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: linux...@vger.kernel.org
Cc: changcheng@intel.com
Cc: xiaoming.w...@intel.com
Cc: souvik.k.chakrava...@intel.com
Cc: l...@amacapital.net
Cc: Daniel Lezcano 
Cc: Linus Torvalds 
Cc: Rafael J. Wysocki 
Cc: linux...@vger.kernel.org
Link: 
http://lkml.kernel.org/r/1409815075-4180-3-git-send-email-chuansheng@intel.com
Signed-off-by: Ingo Molnar 
---
 drivers/cpuidle/cpuidle.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..d31e04c 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -223,7 +223,7 @@ void cpuidle_uninstall_idle_handler(void)
 {
if (enabled_devices) {
initialized = 0;
-   kick_all_cpus_sync();
+   wake_up_all_idle_cpus();
}
 }
 
@@ -530,11 +530,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-   /* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement.  This means we need to get all processors out of their C-state,
@@ -544,7 +539,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
unsigned long l, void *v)
 {
-   smp_call_function(smp_callback, NULL, 1);
+   wake_up_all_idle_cpus();
return NOTIFY_OK;
 }
 


[tip:sched/core] smp: Add new wake_up_all_idle_cpus() function

2014-09-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  c6f4459fc3ba532e896cb678e29b45cb985f82bf
Gitweb: http://git.kernel.org/tip/c6f4459fc3ba532e896cb678e29b45cb985f82bf
Author: Chuansheng Liu 
AuthorDate: Thu, 4 Sep 2014 15:17:54 +0800
Committer:  Ingo Molnar 
CommitDate: Fri, 19 Sep 2014 12:35:15 +0200

smp: Add new wake_up_all_idle_cpus() function

Currently kick_all_cpus_sync() can break non-polling idle cpus
through IPIs.

But sometimes we need to break polling idle cpus immediately to
reselect a suitable C-state, and for non-idle cpus we should do
nothing when trying to wake them.

Add a new function, wake_up_all_idle_cpus(), to bring all cpus out
of idle, based on wake_up_if_idle().
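
The wakeup policy above (set the polling cpu's need_resched flag instead of sending an IPI, and leave non-idle cpus alone) can be sketched as a toy model. The names and structure below are a hypothetical simplification for illustration, not the kernel implementation:

```python
# Toy model of the idle-wakeup policy: a polling-idle CPU watches a
# need_resched flag, so setting the flag is enough to wake it; only a
# non-polling (deep) idle CPU needs a reschedule IPI, and busy CPUs
# are left alone.  Hypothetical simplification, not kernel code.
def wake_up_if_idle(cpu):
    if not cpu["idle"]:
        return "nothing"          # non-idle: do nothing
    if cpu["polling"]:
        cpu["need_resched"] = True
        return "flag"             # woken without an IPI
    return "ipi"                  # deep idle: send a reschedule IPI

assert wake_up_if_idle({"idle": False, "polling": False}) == "nothing"
assert wake_up_if_idle({"idle": True, "polling": True}) == "flag"
assert wake_up_if_idle({"idle": True, "polling": False}) == "ipi"
```

wake_up_all_idle_cpus() then simply applies this per-cpu policy to every online cpu except the caller.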

Signed-off-by: Chuansheng Liu 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: daniel.lezc...@linaro.org
Cc: r...@rjwysocki.net
Cc: linux...@vger.kernel.org
Cc: changcheng@intel.com
Cc: xiaoming.w...@intel.com
Cc: souvik.k.chakrava...@intel.com
Cc: l...@amacapital.net
Cc: Andrew Morton 
Cc: Christoph Hellwig 
Cc: Frederic Weisbecker 
Cc: Geert Uytterhoeven 
Cc: Jan Kara 
Cc: Jens Axboe 
Cc: Jens Axboe 
Cc: Linus Torvalds 
Cc: Michal Hocko 
Cc: Paul Gortmaker 
Cc: Roman Gushchin 
Cc: Srivatsa S. Bhat 
Link: 
http://lkml.kernel.org/r/1409815075-4180-2-git-send-email-chuansheng@intel.com
Signed-off-by: Ingo Molnar 
---
 include/linux/smp.h |  2 ++
 kernel/smp.c        | 22 ++++++++++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 34347f2..93dff5f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -100,6 +100,7 @@ int smp_call_function_any(const struct cpumask *mask,
  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void wake_up_all_idle_cpus(void);
 
 /*
  * Generic and arch helpers
@@ -148,6 +149,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) {  }
+static inline void wake_up_all_idle_cpus(void) {  }
 
 #endif /* !SMP */
 
diff --git a/kernel/smp.c b/kernel/smp.c
index aff8aa1..9e0d0b2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/sched.h>
 
 #include "smpboot.h"
 
@@ -699,3 +700,24 @@ void kick_all_cpus_sync(void)
smp_call_function(do_nothing, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
+
+/**
+ * wake_up_all_idle_cpus - break all cpus out of idle
+ *
+ * Wake every cpu that is in the idle state, including polling idle
+ * cpus; cpus that are not idle are left alone.
+ */
+void wake_up_all_idle_cpus(void)
+{
+   int cpu;
+
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (cpu == smp_processor_id())
+   continue;
+
+   wake_up_if_idle(cpu);
+   }
+   preempt_enable();
+}
+EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);


[tip:sched/core] sched: Add new API wake_up_if_idle() to wake up the idle cpu

2014-09-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  f6be8af1c95de4a46e325e728900a70ceadb52cf
Gitweb: http://git.kernel.org/tip/f6be8af1c95de4a46e325e728900a70ceadb52cf
Author: Chuansheng Liu 
AuthorDate: Thu, 4 Sep 2014 15:17:53 +0800
Committer:  Ingo Molnar 
CommitDate: Fri, 19 Sep 2014 12:35:14 +0200

sched: Add new API wake_up_if_idle() to wake up the idle cpu

Implement a new API, wake_up_if_idle(), which is used to
wake up an idle CPU.

Suggested-by: Andy Lutomirski 
Signed-off-by: Chuansheng Liu 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: daniel.lezc...@linaro.org
Cc: r...@rjwysocki.net
Cc: linux...@vger.kernel.org
Cc: changcheng@intel.com
Cc: xiaoming.w...@intel.com
Cc: souvik.k.chakrava...@intel.com
Cc: chuansheng@intel.com
Cc: Linus Torvalds 
Link: http://lkml.kernel.org/r/1409815075-4180-1-git-send-email-chuansheng@intel.com
Signed-off-by: Ingo Molnar 
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 19 +++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index dd9eb48..82ff3d6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1024,6 +1024,7 @@ struct sched_domain_topology_level {
 extern struct sched_domain_topology_level *sched_domain_topology;
 
 extern void set_sched_topology(struct sched_domain_topology_level *tl);
+extern void wake_up_if_idle(int cpu);
 
 #ifdef CONFIG_SCHED_DEBUG
 # define SD_INIT_NAME(type).name = #type
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 78e5c83..f7c6ed2 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1634,6 +1634,25 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu)
}
 }
 
+void wake_up_if_idle(int cpu)
+{
+   struct rq *rq = cpu_rq(cpu);
+   unsigned long flags;
+
+   if (!is_idle_task(rq->curr))
+   return;
+
+   if (set_nr_if_polling(rq->idle)) {
+   trace_sched_wake_idle_without_ipi(cpu);
+   } else {
+   raw_spin_lock_irqsave(&rq->lock, flags);
+   if (is_idle_task(rq->curr))
+   smp_send_reschedule(cpu);
+   /* Else cpu is not in idle, do nothing here */
+   raw_spin_unlock_irqrestore(&rq->lock, flags);
+   }
+}
+
 bool cpus_share_cache(int this_cpu, int that_cpu)
 {
return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);



[tip:sched/core] cpuidle: Use wake_up_all_idle_cpus() to wake up all idle cpus

2014-09-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  2ed903c5485bad0eafdd3d59ff993598736e4f31
Gitweb: http://git.kernel.org/tip/2ed903c5485bad0eafdd3d59ff993598736e4f31
Author: Chuansheng Liu chuansheng@intel.com
AuthorDate: Thu, 4 Sep 2014 15:17:55 +0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Fri, 19 Sep 2014 12:35:16 +0200

cpuidle: Use wake_up_all_idle_cpus() to wake up all idle cpus

Currently, kick_all_cpus_sync() and smp_call_function() cannot
break a polling idle cpu out of idle immediately.

Use wake_up_all_idle_cpus() instead: it wakes polling idle cpus
quickly, which is much better for power.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
Signed-off-by: Peter Zijlstra (Intel) pet...@infradead.org
Cc: linux...@vger.kernel.org
Cc: changcheng@intel.com
Cc: xiaoming.w...@intel.com
Cc: souvik.k.chakrava...@intel.com
Cc: l...@amacapital.net
Cc: Daniel Lezcano daniel.lezc...@linaro.org
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Rafael J. Wysocki r...@rjwysocki.net
Cc: linux...@vger.kernel.org
Link: 
http://lkml.kernel.org/r/1409815075-4180-3-git-send-email-chuansheng@intel.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 drivers/cpuidle/cpuidle.c | 9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..d31e04c 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -223,7 +223,7 @@ void cpuidle_uninstall_idle_handler(void)
 {
if (enabled_devices) {
initialized = 0;
-   kick_all_cpus_sync();
+   wake_up_all_idle_cpus();
}
 }
 
@@ -530,11 +530,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-   /* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement.  This means we need to get all processors out of their C-state,
@@ -544,7 +539,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
unsigned long l, void *v)
 {
-   smp_call_function(smp_callback, NULL, 1);
+   wake_up_all_idle_cpus();
return NOTIFY_OK;
 }
 


[tip:sched/core] smp: Add new wake_up_all_idle_cpus() function

2014-09-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  c6f4459fc3ba532e896cb678e29b45cb985f82bf
Gitweb: http://git.kernel.org/tip/c6f4459fc3ba532e896cb678e29b45cb985f82bf
Author: Chuansheng Liu chuansheng@intel.com
AuthorDate: Thu, 4 Sep 2014 15:17:54 +0800
Committer:  Ingo Molnar mi...@kernel.org
CommitDate: Fri, 19 Sep 2014 12:35:15 +0200

smp: Add new wake_up_all_idle_cpus() function

Currently, kick_all_cpus_sync() can break non-polling idle cpus
out of idle via IPIs.

But sometimes we need to break polling idle cpus immediately so
they can reselect a suitable C-state; and for non-idle cpus, a
wake-up attempt should do nothing.

Add a new function, wake_up_all_idle_cpus(), that brings all cpus
out of idle, based on wake_up_if_idle().

Signed-off-by: Chuansheng Liu chuansheng@intel.com
Signed-off-by: Peter Zijlstra (Intel) pet...@infradead.org
Cc: daniel.lezc...@linaro.org
Cc: r...@rjwysocki.net
Cc: linux...@vger.kernel.org
Cc: changcheng@intel.com
Cc: xiaoming.w...@intel.com
Cc: souvik.k.chakrava...@intel.com
Cc: l...@amacapital.net
Cc: Andrew Morton a...@linux-foundation.org
Cc: Christoph Hellwig h...@infradead.org
Cc: Frederic Weisbecker fweis...@gmail.com
Cc: Geert Uytterhoeven geert+rene...@glider.be
Cc: Jan Kara j...@suse.cz
Cc: Jens Axboe ax...@fb.com
Cc: Jens Axboe ax...@kernel.dk
Cc: Linus Torvalds torva...@linux-foundation.org
Cc: Michal Hocko mho...@suse.cz
Cc: Paul Gortmaker paul.gortma...@windriver.com
Cc: Roman Gushchin kl...@yandex-team.ru
Cc: Srivatsa S. Bhat srivatsa.b...@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1409815075-4180-2-git-send-email-chuansheng@intel.com
Signed-off-by: Ingo Molnar mi...@kernel.org
---
 include/linux/smp.h |  2 ++
 kernel/smp.c| 22 ++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 34347f2..93dff5f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -100,6 +100,7 @@ int smp_call_function_any(const struct cpumask *mask,
  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void wake_up_all_idle_cpus(void);
 
 /*
  * Generic and arch helpers
@@ -148,6 +149,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) {  }
+static inline void wake_up_all_idle_cpus(void) {  }
 
 #endif /* !SMP */
 
diff --git a/kernel/smp.c b/kernel/smp.c
index aff8aa1..9e0d0b2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/sched.h>
 
 #include "smpboot.h"
 
@@ -699,3 +700,24 @@ void kick_all_cpus_sync(void)
smp_call_function(do_nothing, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
+
+/**
+ * wake_up_all_idle_cpus - break all cpus out of idle
+ * wake_up_all_idle_cpus try to break all cpus which is in idle state even
+ * including idle polling cpus, for non-idle cpus, we will do nothing
+ * for them.
+ */
+void wake_up_all_idle_cpus(void)
+{
+   int cpu;
+
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (cpu == smp_processor_id())
+   continue;
+
+   wake_up_if_idle(cpu);
+   }
+   preempt_enable();
+}
+EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);


[PATCH 3/3] cpuidle: Using the wake_up_all_idle_cpus() to wake up all idle cpus

2014-09-04 Thread Chuansheng Liu
Currently, kick_all_cpus_sync() and smp_call_function() cannot
break a polling idle cpu out of idle immediately.

Use wake_up_all_idle_cpus() instead: it wakes polling idle cpus
quickly, which is much better for power.

Signed-off-by: Chuansheng Liu 
---
 drivers/cpuidle/cpuidle.c |9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..d31e04c 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -223,7 +223,7 @@ void cpuidle_uninstall_idle_handler(void)
 {
if (enabled_devices) {
initialized = 0;
-   kick_all_cpus_sync();
+   wake_up_all_idle_cpus();
}
 }
 
@@ -530,11 +530,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-   /* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement.  This means we need to get all processors out of their C-state,
@@ -544,7 +539,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
unsigned long l, void *v)
 {
-   smp_call_function(smp_callback, NULL, 1);
+   wake_up_all_idle_cpus();
return NOTIFY_OK;
 }
 
-- 
1.7.9.5



[PATCH 1/3] sched: Add new API wake_up_if_idle() to wake up the idle cpu

2014-09-04 Thread Chuansheng Liu
Implement a new API, wake_up_if_idle(), which is used to
wake up an idle CPU.

Suggested-by: Andy Lutomirski 
Signed-off-by: Chuansheng Liu 
---
 include/linux/sched.h |1 +
 kernel/sched/core.c   |   19 +++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 857ba40..3f89ac1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1024,6 +1024,7 @@ struct sched_domain_topology_level {
 extern struct sched_domain_topology_level *sched_domain_topology;
 
 extern void set_sched_topology(struct sched_domain_topology_level *tl);
+extern void wake_up_if_idle(int cpu);
 
 #ifdef CONFIG_SCHED_DEBUG
 # define SD_INIT_NAME(type).name = #type
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1211575..b818548 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1620,6 +1620,25 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu)
}
 }
 
+void wake_up_if_idle(int cpu)
+{
+   struct rq *rq = cpu_rq(cpu);
+   unsigned long flags;
+
+   if (!is_idle_task(rq->curr))
+   return;
+
+   if (set_nr_if_polling(rq->idle)) {
+   trace_sched_wake_idle_without_ipi(cpu);
+   } else {
+   raw_spin_lock_irqsave(&rq->lock, flags);
+   if (is_idle_task(rq->curr))
+   smp_send_reschedule(cpu);
+   /* Else cpu is not in idle, do nothing here */
+   raw_spin_unlock_irqrestore(&rq->lock, flags);
+   }
+}
+
 bool cpus_share_cache(int this_cpu, int that_cpu)
 {
return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-- 
1.7.9.5



[PATCH 2/3] smp: Adding new function wake_up_all_idle_cpus()

2014-09-04 Thread Chuansheng Liu
Currently, kick_all_cpus_sync() can break non-polling idle cpus
out of idle via IPIs.

But sometimes we need to break polling idle cpus immediately so
they can reselect a suitable C-state; and for non-idle cpus, a
wake-up attempt should do nothing.

Add a new function, wake_up_all_idle_cpus(), that brings all cpus
out of idle, based on wake_up_if_idle().

Signed-off-by: Chuansheng Liu 
---
 include/linux/smp.h |2 ++
 kernel/smp.c|   22 ++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 34347f2..93dff5f 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -100,6 +100,7 @@ int smp_call_function_any(const struct cpumask *mask,
  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void wake_up_all_idle_cpus(void);
 
 /*
  * Generic and arch helpers
@@ -148,6 +149,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) {  }
+static inline void wake_up_all_idle_cpus(void) {  }
 
 #endif /* !SMP */
 
diff --git a/kernel/smp.c b/kernel/smp.c
index aff8aa1..9e0d0b2 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/sched.h>
 
 #include "smpboot.h"
 
@@ -699,3 +700,24 @@ void kick_all_cpus_sync(void)
smp_call_function(do_nothing, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
+
+/**
+ * wake_up_all_idle_cpus - break all cpus out of idle
+ * wake_up_all_idle_cpus try to break all cpus which is in idle state even
+ * including idle polling cpus, for non-idle cpus, we will do nothing
+ * for them.
+ */
+void wake_up_all_idle_cpus(void)
+{
+   int cpu;
+
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (cpu == smp_processor_id())
+   continue;
+
+   wake_up_if_idle(cpu);
+   }
+   preempt_enable();
+}
+EXPORT_SYMBOL_GPL(wake_up_all_idle_cpus);
-- 
1.7.9.5






[PATCH] ata: Disabling the async PM for JMicron chip 363/361

2014-08-31 Thread Chuansheng Liu
After the async noirq PM feature was enabled (commit 76569faa62
("PM / sleep: Asynchronous threads for resume_noirq")),
Jay hit a system resume issue where one of the JMicron controllers
could not be powered up.

His device tree is like below:
 +-1c.4-[02]--+-00.0  JMicron Technology Corp. JMB363 SATA/IDE Controller
 |            \-00.1  JMicron Technology Corp. JMB363 SATA/IDE Controller

After investigation, we found that the JMicron chip 363 includes
one SATA controller (:02:00.0) and one PATA controller (:02:00.1).
These two controllers have no parent-child relationship,
but the PATA controller can only be powered on after the SATA
controller has finished powering on.

If async noirq is enabled, the below error is hit during the noirq
phase:
pata_jmicron :02:00.1: Refused to change power state, currently in D3

So for the JMicron chips 363/361, we must forcibly disable the async method.

Bug detail: https://bugzilla.kernel.org/show_bug.cgi?id=81551

Reported-by: Jay 
Signed-off-by: Chuansheng Liu 
---
 drivers/ata/ahci.c |   11 +++
 drivers/ata/pata_jmicron.c |   11 +++
 2 files changed, 22 insertions(+)

diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
index a29f801..f5634cd 100644
--- a/drivers/ata/ahci.c
+++ b/drivers/ata/ahci.c
@@ -1329,6 +1329,17 @@ static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000)
ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS;
 
+   /* The JMicron chip 361/363 contains one SATA controller and
+    * one PATA controller; to power on both controllers, we must
+    * follow the sequence one by one, otherwise one of them
+    * cannot be powered on successfully.
+    * So disable the async suspend method for these chips.
+    */
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
+   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
+   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   device_disable_async_suspend(&pdev->dev);
+
/* acquire resources */
rc = pcim_enable_device(pdev);
if (rc)
diff --git a/drivers/ata/pata_jmicron.c b/drivers/ata/pata_jmicron.c
index 4d1a5d2..6b7aa77 100644
--- a/drivers/ata/pata_jmicron.c
+++ b/drivers/ata/pata_jmicron.c
@@ -143,6 +143,17 @@ static int jmicron_init_one (struct pci_dev *pdev, const struct pci_device_id *i
};
const struct ata_port_info *ppi[] = { &info, NULL };
 
+   /* The JMicron chip 361/363 contains one SATA controller and
+    * one PATA controller; to power on both controllers, we must
+    * follow the sequence one by one, otherwise one of them
+    * cannot be powered on successfully.
+    * So disable the async suspend method for these chips.
+    */
+   if (pdev->vendor == PCI_VENDOR_ID_JMICRON &&
+   (pdev->device == PCI_DEVICE_ID_JMICRON_JMB363 ||
+   pdev->device == PCI_DEVICE_ID_JMICRON_JMB361))
+   device_disable_async_suspend(&pdev->dev);
+
return ata_pci_bmdma_init_one(pdev, ppi, _sht, NULL, 0);
 }
 
-- 
1.7.9.5




[PATCH 1/3] sched: Add new API wake_up_if_idle() to wake up the idle cpu

2014-08-18 Thread Chuansheng Liu
Implement a new API, wake_up_if_idle(), which is used to
wake up an idle CPU.

Suggested-by: Andy Lutomirski 
Signed-off-by: Chuansheng Liu 
---
 include/linux/sched.h |1 +
 kernel/sched/core.c   |   16 
 2 files changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 857ba40..3f89ac1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1024,6 +1024,7 @@ struct sched_domain_topology_level {
 extern struct sched_domain_topology_level *sched_domain_topology;
 
 extern void set_sched_topology(struct sched_domain_topology_level *tl);
+extern void wake_up_if_idle(int cpu);
 
 #ifdef CONFIG_SCHED_DEBUG
 # define SD_INIT_NAME(type).name = #type
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1211575..adf104f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1620,6 +1620,22 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu)
}
 }
 
+void wake_up_if_idle(int cpu)
+{
+   struct rq *rq = cpu_rq(cpu);
+   unsigned long flags;
+
+   if (set_nr_if_polling(rq->idle)) {
+   trace_sched_wake_idle_without_ipi(cpu);
+   } else {
+   raw_spin_lock_irqsave(&rq->lock, flags);
+   if (rq->curr == rq->idle)
+   smp_send_reschedule(cpu);
+   /* Else cpu is not in idle, do nothing here */
+   raw_spin_unlock_irqrestore(&rq->lock, flags);
+   }
+}
+
 bool cpus_share_cache(int this_cpu, int that_cpu)
 {
return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-- 
1.7.9.5



[PATCH 2/3] smp: Adding new function wake_up_all_cpus()

2014-08-18 Thread Chuansheng Liu
Currently, kick_all_cpus_sync() can break non-polling idle cpus
out of idle via IPIs.

But sometimes we need to break polling idle cpus immediately so
they can reselect a suitable C-state; and for non-idle cpus, a
wake-up attempt should do nothing.

Add a new function, wake_up_all_cpus(), that brings all cpus
out of idle, based on wake_up_if_idle().

Signed-off-by: Chuansheng Liu 
---
 include/linux/smp.h |2 ++
 kernel/smp.c|   22 ++
 2 files changed, 24 insertions(+)

diff --git a/include/linux/smp.h b/include/linux/smp.h
index 34347f2..1cfb1fd 100644
--- a/include/linux/smp.h
+++ b/include/linux/smp.h
@@ -100,6 +100,7 @@ int smp_call_function_any(const struct cpumask *mask,
  smp_call_func_t func, void *info, int wait);
 
 void kick_all_cpus_sync(void);
+void wake_up_all_cpus(void);
 
 /*
  * Generic and arch helpers
@@ -148,6 +149,7 @@ smp_call_function_any(const struct cpumask *mask, smp_call_func_t func,
 }
 
 static inline void kick_all_cpus_sync(void) {  }
+static inline void wake_up_all_cpus(void) {  }
 
 #endif /* !SMP */
 
diff --git a/kernel/smp.c b/kernel/smp.c
index aff8aa1..1e69507 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/sched.h>
 
 #include "smpboot.h"
 
@@ -699,3 +700,24 @@ void kick_all_cpus_sync(void)
smp_call_function(do_nothing, NULL, 1);
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
+
+/**
+ * wake_up_all_cpus - break all cpus out of idle
+ * wake_up_all_cpus() tries to break all cpus out of idle, even cpus
+ * in the polling idle state; non-idle cpus are left alone.
+ */
+void wake_up_all_cpus(void)
+{
+   int cpu;
+
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (cpu == smp_processor_id())
+   continue;
+
+   wake_up_if_idle(cpu);
+   }
+   preempt_enable();
+}
+EXPORT_SYMBOL_GPL(wake_up_all_cpus);
-- 
1.7.9.5



[PATCH 3/3] cpuidle: Using the wake_up_all_cpus() to wake up all idle cpus

2014-08-18 Thread Chuansheng Liu
Currently neither kick_all_cpus_sync() nor smp_call_function() can
break a polling idle cpu immediately.

Here use wake_up_all_cpus() instead, which can wake up a polling idle
cpu quickly; this helps power.

Signed-off-by: Chuansheng Liu 
---
 drivers/cpuidle/cpuidle.c |9 ++---
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..56d5c4d 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -223,7 +223,7 @@ void cpuidle_uninstall_idle_handler(void)
 {
if (enabled_devices) {
initialized = 0;
-   kick_all_cpus_sync();
+   wake_up_all_cpus();
}
 }
 
@@ -530,11 +530,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-   /* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement.  This means we need to get all processors out of their C-state,
@@ -544,7 +539,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
unsigned long l, void *v)
 {
-   smp_call_function(smp_callback, NULL, 1);
+   wake_up_all_cpus();
return NOTIFY_OK;
 }
 
-- 
1.7.9.5




[PATCH 1/3] sched: Add new API wake_up_if_idle() to wake up the idle cpu

2014-08-15 Thread Chuansheng Liu
Implement one new API, wake_up_if_idle(), which is used to
wake up a CPU only when it is idle.

Suggested-by: Andy Lutomirski 
Signed-off-by: Chuansheng Liu 
---
 include/linux/sched.h |1 +
 kernel/sched/core.c   |   16 
 2 files changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 857ba40..3f89ac1 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1024,6 +1024,7 @@ struct sched_domain_topology_level {
 extern struct sched_domain_topology_level *sched_domain_topology;
 
 extern void set_sched_topology(struct sched_domain_topology_level *tl);
+extern void wake_up_if_idle(int cpu);
 
 #ifdef CONFIG_SCHED_DEBUG
 # define SD_INIT_NAME(type).name = #type
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1211575..adf104f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1620,6 +1620,22 @@ static void ttwu_queue_remote(struct task_struct *p, int cpu)
}
 }
 
+void wake_up_if_idle(int cpu)
+{
+   struct rq *rq = cpu_rq(cpu);
+   unsigned long flags;
+
+   if (set_nr_if_polling(rq->idle)) {
+   trace_sched_wake_idle_without_ipi(cpu);
+   } else {
+   raw_spin_lock_irqsave(&rq->lock, flags);
+   if (rq->curr == rq->idle)
+   smp_send_reschedule(cpu);
+   /* Else cpu is not in idle, do nothing here */
+   raw_spin_unlock_irqrestore(&rq->lock, flags);
+   }
+}
+
 bool cpus_share_cache(int this_cpu, int that_cpu)
 {
return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-- 
1.7.9.5



[PATCH 2/3] smp: re-implement the kick_all_cpus_sync() with wake_up_if_idle()

2014-08-15 Thread Chuansheng Liu
Currently smp_call_function() just wakes up the corresponding
cpu, but can not break the polling idle loop.

Here use the new sched API wake_up_if_idle() to implement it.

Signed-off-by: Chuansheng Liu 
---
 kernel/smp.c |   18 +++---
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/kernel/smp.c b/kernel/smp.c
index aff8aa1..0b647c3 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -13,6 +13,7 @@
 #include <linux/gfp.h>
 #include <linux/smp.h>
 #include <linux/cpu.h>
+#include <linux/sched.h>
 
 #include "smpboot.h"
 
@@ -677,10 +678,6 @@ void on_each_cpu_cond(bool (*cond_func)(int cpu, void *info),
 }
 EXPORT_SYMBOL(on_each_cpu_cond);
 
-static void do_nothing(void *unused)
-{
-}
-
 /**
  * kick_all_cpus_sync - Force all cpus out of idle
  *
@@ -694,8 +691,15 @@ static void do_nothing(void *unused)
  */
 void kick_all_cpus_sync(void)
 {
-   /* Make sure the change is visible before we kick the cpus */
-   smp_mb();
-   smp_call_function(do_nothing, NULL, 1);
+   int cpu;
+
+   preempt_disable();
+   for_each_online_cpu(cpu) {
+   if (cpu == smp_processor_id())
+   continue;
+
+   wake_up_if_idle(cpu);
+   }
+   preempt_enable();
 }
 EXPORT_SYMBOL_GPL(kick_all_cpus_sync);
-- 
1.7.9.5



[PATCH 3/3] cpuidle: Using the kick_all_cpus_sync() to wake up all cpus

2014-08-15 Thread Chuansheng Liu
The current latency notify callback does the same thing as
kick_all_cpus_sync().

Here use kick_all_cpus_sync() directly to remove the redundant code.

Signed-off-by: Chuansheng Liu 
---
 drivers/cpuidle/cpuidle.c |7 +--
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..7827375 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -530,11 +530,6 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 #ifdef CONFIG_SMP
 
-static void smp_callback(void *v)
-{
-   /* we already woke the CPU up, nothing more to do */
-}
-
 /*
  * This function gets called when a part of the kernel has a new latency
  * requirement.  This means we need to get all processors out of their C-state,
@@ -544,7 +539,7 @@ static void smp_callback(void *v)
 static int cpuidle_latency_notify(struct notifier_block *b,
unsigned long l, void *v)
 {
-   smp_call_function(smp_callback, NULL, 1);
+   kick_all_cpus_sync();
return NOTIFY_OK;
 }
 
-- 
1.7.9.5



[PATCH] cpuidle: Fix the CPU stuck at C0 for 2-3s after PM_QOS back to DEFAULT

2014-08-13 Thread Chuansheng Liu
We found that sometimes, even after PM_QOS has gone back to DEFAULT,
a CPU can stay stuck at C0 for 2-3s, not doing the new suitable
C-state selection immediately after receiving the IPI interrupt.

The code model is simply like below:
{
pm_qos_update_request(&pm_qos, C1 - 1);
<== Here keep all cores at C0
...;
pm_qos_update_request(&pm_qos, PM_QOS_DEFAULT_VALUE);
<== Here some cores still stuck at C0 for 2-3s
}

The reason is that when pm_qos goes back to DEFAULT, an IPI interrupt
is sent to wake up the core, but when the core is in the polling idle
state, the IPI interrupt can not break the polling loop.

So here, in the IPI callback interrupt, when the idle task is
currently running, we forcibly set the reschedule bit to break the
polling loop; other, non-polling idle states are broken by the IPI
interrupt directly, and setting the reschedule bit does no harm to
them either.

With this fix, we saved about 30mV power in our android platform.

Signed-off-by: Chuansheng Liu 
---
 drivers/cpuidle/cpuidle.c |8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index ee9df5e..9e28a13 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -532,7 +532,13 @@ EXPORT_SYMBOL_GPL(cpuidle_register);
 
 static void smp_callback(void *v)
 {
-   /* we already woke the CPU up, nothing more to do */
+   /* We already woke the CPU up; additionally, when the
+* corresponding CPU is in the polling idle state, set the
+* resched bit to trigger reselection of a suitable C-state,
+* which helps power.
+*/
+   if (is_idle_task(current))
+   set_tsk_need_resched(current);
 }
 
 /*
-- 
1.7.9.5



[PATCH v2] usb: gadget: return the right length in ffs_epfile_io()

2014-03-03 Thread Chuansheng Liu
When the request length is aligned to maxpacketsize, the returned
length ret is sometimes greater than the len that user space requested.

In that case we already use min_t(size_t, ret, len) to limit the copy
size and avoid overflowing the user data buffer.

But we also need to return min_t(size_t, ret, len) so that user space
is told the right length.

Acked-by: Michal Nazarewicz 
Reviewed-by: David Cohen 
Signed-off-by: Chuansheng Liu 
---
 drivers/usb/gadget/f_fs.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c
index 42f7a0e..780f877 100644
--- a/drivers/usb/gadget/f_fs.c
+++ b/drivers/usb/gadget/f_fs.c
@@ -845,12 +845,14 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
 * we may end up with more data then user space has
 * space for.
 */
-   ret = ep->status;
-   if (io_data->read && ret > 0 &&
-   unlikely(copy_to_user(io_data->buf, data,
- min_t(size_t, ret,
- io_data->len))))
-   ret = -EFAULT;
+   ret = ep->status;
+   if (io_data->read && ret > 0) {
+   ret = min_t(size_t, ret, io_data->len);
+
+   if (unlikely(copy_to_user(io_data->buf,
+   data, ret)))
+   ret = -EFAULT;
+   }
}
kfree(data);
}
-- 
1.9.rc0



[tip:irq/urgent] genirq: Remove racy waitqueue_active check

2014-02-27 Thread tip-bot for Chuansheng Liu
Commit-ID:  c685689fd24d310343ac33942e9a54a974ae9c43
Gitweb: http://git.kernel.org/tip/c685689fd24d310343ac33942e9a54a974ae9c43
Author: Chuansheng Liu 
AuthorDate: Mon, 24 Feb 2014 11:29:50 +0800
Committer:  Thomas Gleixner 
CommitDate: Thu, 27 Feb 2014 10:54:16 +0100

genirq: Remove racy waitqueue_active check

We hit one rare case below:

T1 calling disable_irq(), but hanging at synchronize_irq()
always;
The corresponding irq thread is in sleeping state;
And all CPUs are in idle state;

After analysis, we found there is one possible scenario which
causes T1 to wait there forever:
CPU0   CPU1
 synchronize_irq()
  wait_event()
spin_lock()
   atomic_dec_and_test(&threads_active)
  insert the __wait into queue
spin_unlock()
   if(waitqueue_active)
    atomic_read(&threads_active)
 wake_up()

Here, after CPU0 has inserted __wait into the queue, and before CPU1
tests whether the queue is empty, there is no barrier; the update may
not be visible to CPU1 immediately, even though CPU0 has already
updated the queue list.
The same applies to CPU0's atomic_read() of threads_active.

So we'd need one smp_mb() before waitqueue_active(); but removing the
waitqueue_active() check solves it as well, and it makes things
simple and clear.

Signed-off-by: Chuansheng Liu 
Cc: Xiaoming Wang 
Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng@intel.com
Cc: sta...@vger.kernel.org
Signed-off-by: Thomas Gleixner 
---
 kernel/irq/manage.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 481a13c..d3bf660 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -802,8 +802,7 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
 
 static void wake_threads_waitq(struct irq_desc *desc)
 {
-   if (atomic_dec_and_test(&desc->threads_active) &&
-   waitqueue_active(&desc->wait_for_threads))
+   if (atomic_dec_and_test(&desc->threads_active))
wake_up(&desc->wait_for_threads);
 }
 



[PATCH] usb: gadget: return the right length in ffs_epfile_io()

2014-02-26 Thread Chuansheng Liu
When the request length is aligned to maxpacketsize, the returned
length ret is sometimes greater than the len that user space requested.

In that case we already use min_t(size_t, ret, len) to limit the copy
size and avoid overflowing the user data buffer.

But we also need to return min_t(size_t, ret, len) so that user space
is told the right length.

Signed-off-by: Chuansheng Liu 
---
 drivers/usb/gadget/f_fs.c | 10 ++
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c
index 2b43343..31ee7af 100644
--- a/drivers/usb/gadget/f_fs.c
+++ b/drivers/usb/gadget/f_fs.c
@@ -687,10 +687,12 @@ static ssize_t ffs_epfile_io(struct file *file,
 * space for.
 */
ret = ep->status;
-   if (read && ret > 0 &&
-   unlikely(copy_to_user(buf, data,
- min_t(size_t, ret, len))))
-   ret = -EFAULT;
+   if (read && ret > 0) {
+   ret = min_t(size_t, ret, len);
+
+   if (unlikely(copy_to_user(buf, data, ret)))
+   ret = -EFAULT;
+   }
}
}
 
-- 
1.9.rc0



[PATCH] genirq: Fix the possible synchronize_irq() wait-forever

2014-02-23 Thread Chuansheng Liu
We hit one rare case below:
T1 is calling disable_irq(), but hangs at synchronize_irq()
forever;
The corresponding irq thread is in the sleeping state;
And all CPUs are in the idle state;

After analysis, we found one possible scenario which
causes T1 to wait there forever:
CPU0                                    CPU1
 synchronize_irq()
  wait_event()
   spin_lock()
                                         atomic_dec_and_test(&desc->threads_active)
   insert the __wait into queue
   spin_unlock()
                                         if (waitqueue_active(&desc->wait_for_threads))
   atomic_read(&desc->threads_active)
                                          wake_up()

Here, after the __wait has been inserted into the queue on CPU0, and
before CPU1 tests whether the queue is empty, there is no barrier, so
the update may not be visible to CPU1 immediately, even though CPU0
has updated the queue list.
The same applies to CPU0's atomic_read() of threads_active.

So we need an smp_mb() before waitqueue_active(), or something like
that.

Thomas suggested a better option: removing the waitqueue_active()
check entirely, which makes things simple and clear.
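The resulting pattern has a close user-space analogue: with a condition variable, the waker signals under the lock unconditionally instead of first peeking at "is anyone waiting", which is exactly the racy check being removed. A minimal pthreads sketch (illustrative names, not the kernel code):

```c
#include <assert.h>
#include <pthread.h>

/* Analogues of desc->threads_active and desc->wait_for_threads. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  all_done = PTHREAD_COND_INITIALIZER;
static int threads_active;

/* synchronize_irq() analogue: sleep until every handler has finished. */
static void wait_for_threads(void)
{
	pthread_mutex_lock(&lock);
	while (threads_active > 0)
		pthread_cond_wait(&all_done, &lock);
	pthread_mutex_unlock(&lock);
}

/* wake_threads_waitq() analogue: decrement and ALWAYS signal when the
 * count hits zero -- no "is the waitqueue non-empty?" test that could
 * race with a waiter enqueueing itself. */
static void thread_finished(void)
{
	pthread_mutex_lock(&lock);
	if (--threads_active == 0)
		pthread_cond_broadcast(&all_done);
	pthread_mutex_unlock(&lock);
}
```

Here the mutex supplies the ordering that the lock-free fast path lacked; the unconditional broadcast is the analogue of dropping waitqueue_active().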

Cc: Thomas Gleixner 
Cc: Xiaoming Wang 
Signed-off-by: Chuansheng Liu 
---
 kernel/irq/manage.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 481a13c..d3bf660 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -802,8 +802,7 @@ static irqreturn_t irq_thread_fn(struct irq_desc *desc,
 
 static void wake_threads_waitq(struct irq_desc *desc)
 {
-   if (atomic_dec_and_test(&desc->threads_active) &&
-   waitqueue_active(&desc->wait_for_threads))
+   if (atomic_dec_and_test(&desc->threads_active))
        wake_up(&desc->wait_for_threads);
 }
 
-- 
1.9.rc0




[tip:irq/core] genirq: Update the a comment typo

2014-02-19 Thread tip-bot for Chuansheng Liu
Commit-ID:  b04c644e670f79417f1728e6be310cfd8e6a921b
Gitweb: http://git.kernel.org/tip/b04c644e670f79417f1728e6be310cfd8e6a921b
Author: Chuansheng Liu 
AuthorDate: Mon, 10 Feb 2014 16:13:57 +0800
Committer:  Thomas Gleixner 
CommitDate: Wed, 19 Feb 2014 17:26:34 +0100

genirq: Update the a comment typo

Change the comment "chasnge" to "change".

Signed-off-by: Chuansheng Liu 
Link: 
http://lkml.kernel.org/r/1392020037-5484-2-git-send-email-chuansheng@intel.com
Signed-off-by: Thomas Gleixner 
---
 kernel/irq/manage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 54eb5c9..ada0c54 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -757,7 +757,7 @@ out_unlock:
 
 #ifdef CONFIG_SMP
 /*
- * Check whether we need to chasnge the affinity of the interrupt thread.
+ * Check whether we need to change the affinity of the interrupt thread.
  */
 static void
 irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)



[PATCH v4 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 66 ++-
 1 file changed, 54 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 9335b32..42355e4 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,26 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
+
+   dpm_wait_for_children(dev, async);
 
if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1170,40 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_late - Execute "late suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1214,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1236,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v4 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index ea3f1d2..6d41165 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -610,6 +610,8 @@ static int device_resume_early(struct device *dev, pm_message_t state)
if (!dev->power.is_late_suspended)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " early", error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
 
-- 
1.9.rc0



[PATCH v4 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 6d41165..9335b32 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev->power.syscore)
-   return 0;
+   goto Complete;
+
+   dpm_wait_for_children(dev, async);
 
if (dev->pm_domain) {
info = "noirq power domain ";
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v4 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-17 Thread Chuansheng Liu
The patch is a helper adding two new flags for implementing
async threads for suspend_noirq and suspend_late.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h|  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev->power.is_prepared = false;
dev->power.is_suspended = false;
+   dev->power.is_noirq_suspended = false;
+   dev->power.is_late_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_noirq_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_late_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev->power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
callback = pm_noirq_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
callback = pm_late_early_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned intasync_suspend:1;
boolis_prepared:1;  /* Owned by the PM core */
boolis_suspended:1; /* Ditto */
+   boolis_noirq_suspended:1;
+   boolis_late_suspended:1;
boolignore_children:1;
boolearly_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0



[PATCH v4 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is:
In the resume_noirq phase, for PCI devices, the function
pci_pm_resume_noirq() is called, and each call waits for at least one
d3_delay (10 ms).

With asynchronous threads, these d3_delay waits run in parallel, so
we effectively pay the delay only once, which saves much time and
makes resume quicker.
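The arithmetic behind that claim can be modelled directly (an illustrative cost model only, using the 10 ms d3_delay figure above):

```c
#include <assert.h>

enum { D3_DELAY_MS = 10 };	/* minimum PCI D3 -> D0 transition delay */

/* Serially, every device pays its own d3_delay in turn. */
static int serial_resume_cost_ms(int ndevs)
{
	return ndevs * D3_DELAY_MS;
}

/* With async threads the delays overlap, so the phase costs roughly
 * one d3_delay regardless of the device count. */
static int async_resume_cost_ms(int ndevs)
{
	return ndevs > 0 ? D3_DELAY_MS : 0;
}
```

For eight such PCI devices this is 80 ms versus roughly 10 ms, consistent with the large resume_noirq saving reported for this series.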

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..ea3f1d2 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -484,6 +484,8 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (!dev->power.is_noirq_suspended)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+   && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-   && !pm_trace_is_enabled();
-}

[PATCH v4 0/5] Enabling the asynchronous threads for other phases

2014-02-17 Thread Chuansheng Liu
Hello,

This patch series enables asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.

Just like commits 5af84b82701a and 97df8c12995, using async threads
reduces the system suspend and resume time significantly.

With these patches, on my test platform, 80% of the time was saved in
the resume_noirq phase.

Best Regards,

---
V2:
 -- Based on Rafael's minor changes related to coding style, white space, etc.;
 -- Rafael pointed out that the v1 series breaks the device parent-children
suspending/resuming order when async is enabled; here the device
completion is used to synchronize the parent-children order;

V3:
 -- In patch v2 5/5, a "dpm_wait_for_children" call was missing;

V4:
 -- Rafael pointed out that dpm_wait_for_children()/dpm_wait()
should be placed after the simple early-return statements;
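The completion-based parent-children ordering mentioned in the V2 note can be sketched in user-space C (hypothetical names; the kernel side uses struct completion with complete_all() and dpm_wait()):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Minimal stand-in for the kernel's struct completion. */
struct completion {
	pthread_mutex_t lock;
	pthread_cond_t  cond;
	bool            done;
};

#define COMPLETION_INITIALIZER \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, false }

/* complete_all() analogue: an async child marks itself finished. */
static void completion_signal(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	c->done = true;
	pthread_cond_broadcast(&c->cond);
	pthread_mutex_unlock(&c->lock);
}

/* dpm_wait() analogue: the parent blocks until the child has
 * completed, which is how the series preserves parent-children
 * ordering even when callbacks run in asynchronous threads. */
static void completion_wait(struct completion *c)
{
	pthread_mutex_lock(&c->lock);
	while (!c->done)
		pthread_cond_wait(&c->cond, &c->lock);
	pthread_mutex_unlock(&c->lock);
}
```

On suspend the parent waits for all of its children's completions before running its own callback; on resume each child waits on its parent's completion instead.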

---

[PATCH v4 1/5] PM / sleep: Two flags for async suspend_noirq and
[PATCH v4 2/5] PM / sleep: Asynchronous threads for resume_noirq
[PATCH v4 3/5] PM / sleep: Asynchronous threads for resume_early
[PATCH v4 4/5] PM / sleep: Asynchronous threads for suspend_noirq
[PATCH v4 5/5] PM / sleep: Asynchronous threads for suspend_late

 drivers/base/power/main.c | 275 ++
 include/linux/pm.h        |   2 ++
 2 files changed, 227 insertions(+), 50 deletions(-)



[PATCH v4 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-17 Thread Chuansheng Liu
The patch is a helper adding two new flags for implementing
async threads for suspend_noirq and suspend_late.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h|  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev-power.is_prepared = false;
dev-power.is_suspended = false;
+   dev-power.is_noirq_suspended = false;
+   dev-power.is_late_suspended = false;
init_completion(dev-power.completion);
complete_all(dev-power.completion);
dev-power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev-power.syscore)
goto Out;
 
+   if (!dev-power.is_noirq_suspended)
+   goto Out;
+
if (dev-pm_domain) {
info = noirq power domain ;
callback = pm_noirq_op(dev-pm_domain-ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev-power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev-power.syscore)
goto Out;
 
+   if (!dev-power.is_late_suspended)
+   goto Out;
+
if (dev-pm_domain) {
info = early power domain ;
callback = pm_late_early_op(dev-pm_domain-ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev-power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev-power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
callback = pm_noirq_op(dev-driver-pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev-power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
callback = pm_late_early_op(dev-driver-pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned intasync_suspend:1;
boolis_prepared:1;  /* Owned by the PM core */
boolis_suspended:1; /* Ditto */
+   boolis_noirq_suspended:1;
+   boolis_late_suspended:1;
boolignore_children:1;
boolearly_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v4 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is:
In the resume_noirq phase, pci_pm_resume_noirq() is called for each
PCI device, and it incurs at least one d3_delay (10ms).

With asynchronous threads, those d3_delay waits run in parallel, so
the total delay is paid roughly once instead of once per device,
which shortens the overall resume time considerably.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..ea3f1d2 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -484,6 +484,8 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (!dev->power.is_noirq_suspended)
goto Out;

+   dpm_wait(dev->parent, async);
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+   && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
 * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-   && !pm_trace_is_enabled();
-}
-
 /**
 * dpm_resume - Execute "resume" callbacks for non-sysdev devices.
  * @state: PM

[PATCH v4 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 6d41165..9335b32 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev->power.syscore)
-   return 0;
+   goto Complete;
+
+   dpm_wait_for_children(dev, async);

if (dev->pm_domain) {
info = "noirq power domain ";
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
 * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v4 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index ea3f1d2..6d41165 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -610,6 +610,8 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (!dev->power.is_late_suspended)
goto Out;

+   dpm_wait(dev->parent, async);
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
 * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state,  early, error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
 
-- 
1.9.rc0



[PATCH v4 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-17 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 66 ++-
 1 file changed, 54 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 9335b32..42355e4 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,26 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
+
+   dpm_wait_for_children(dev, async);

if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1170,40 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
 * dpm_suspend_late - Execute "late suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1214,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1236,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is:
In the resume_noirq phase, pci_pm_resume_noirq() is called for each
PCI device, and it incurs at least one d3_delay (10ms).

With asynchronous threads, those d3_delay waits run in parallel, so
the total delay is paid roughly once instead of once per device,
which shortens the overall resume time considerably.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..89172aa 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -481,6 +481,8 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_noirq_suspended)
goto Out;
 
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+   && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-   && !pm_trace_is_enabled();
-}
-
 /**
  * dpm_resume - Execute "resume" callbacks for non-sysdev devices.

[PATCH v3 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 66 ++-
 1 file changed, 54 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 72b4c9c..c031050 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,26 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1170,40 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_late - Execute "late suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1214,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1236,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 89172aa..2f2d110 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -607,6 +607,8 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_late_suspended)
goto Out;
 
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " early", error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
 
-- 
1.9.rc0



[PATCH v3 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-16 Thread Chuansheng Liu
This patch is a helper that adds two new flags used to implement
asynchronous threads for the suspend_noirq and suspend_late phases.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h|  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev->power.is_prepared = false;
dev->power.is_suspended = false;
+   dev->power.is_noirq_suspended = false;
+   dev->power.is_late_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_noirq_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_late_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev->power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
callback = pm_noirq_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
callback = pm_late_early_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned intasync_suspend:1;
boolis_prepared:1;  /* Owned by the PM core */
boolis_suspended:1; /* Ditto */
+   boolis_noirq_suspended:1;
+   boolis_late_suspended:1;
boolignore_children:1;
boolearly_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0



[PATCH v3 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 2f2d110..72b4c9c 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "noirq power domain ";
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 0/5] Enabling the asynchronous threads for other phases

2014-02-16 Thread Chuansheng Liu
Hello,

This patch series is for enabling asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.

Just like commits 5af84b82701a and 97df8c12995, using async threads
reduces the system suspend and resume time significantly.

With these patches, on my test platform, 80% of the time was saved in
the resume_noirq phase.

Best Regards,

---
V2:
 -- Based on Rafael's minor changes related to coding style, white space etc;
 -- Rafael pointed out that the v1 series breaks the device parent-children
    suspend/resume ordering when async is enabled; the per-device
    completion is now used to keep the parent-children order;

V3:
 -- In patch v2 5/5, a "dpm_wait_for_children" call was missing;

---

[PATCH v3 1/5] PM / sleep: Two flags for async suspend_noirq and
[PATCH v3 2/5] PM / sleep: Asynchronous threads for resume_noirq
[PATCH v3 3/5] PM / sleep: Asynchronous threads for resume_early
[PATCH v3 4/5] PM / sleep: Asynchronous threads for suspend_noirq
[PATCH v3 5/5] PM / sleep: Asynchronous threads for suspend_late

 drivers/base/power/main.c | 275 
++
 include/linux/pm.h|   2 ++
 2 files changed, 227 insertions(+), 50 deletions(-)




[PATCH v2 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 64 ++-
 1 file changed, 52 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 72b4c9c..0c5fad0 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,24 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1168,40 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_late - Execute "late suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1212,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1234,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v2 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 2f2d110..72b4c9c 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "noirq power domain ";
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_noirq - Execute "noirq suspend" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v2 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 89172aa..2f2d110 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -607,6 +607,8 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_late_suspended)
goto Out;
 
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_early - Execute "early resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " early", error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
 
-- 
1.9.rc0



[PATCH v2 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is:
In the resume_noirq phase, for PCI devices the function
pci_pm_resume_noirq() is called, and it involves at least one
d3_delay (10 ms).

With asynchronous threads, each call only needs to wait out the
d3_delay once, in parallel with the others, which saves much
resume time.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..89172aa 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -481,6 +481,8 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_noirq_suspended)
goto Out;
 
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+   && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-   && !pm_trace_is_enabled();
-}
-
 /**
  * dpm_resume - Execute "resume" callbacks for non-sysdev devices.

[PATCH v2 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-16 Thread Chuansheng Liu
This patch is a helper that adds two new flags used to implement
async threads for suspend_noirq and suspend_late.

Signed-off-by: Chuansheng Liu 
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h|  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev->power.is_prepared = false;
dev->power.is_suspended = false;
+   dev->power.is_noirq_suspended = false;
+   dev->power.is_late_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_noirq_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_late_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev->power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
callback = pm_noirq_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
callback = pm_late_early_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned intasync_suspend:1;
boolis_prepared:1;  /* Owned by the PM core */
boolis_suspended:1; /* Ditto */
+   boolis_noirq_suspended:1;
+   boolis_late_suspended:1;
boolignore_children:1;
boolearly_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0



[PATCH v2 0/5] Enabling the asynchronous threads for other phases

2014-02-16 Thread Chuansheng Liu
Hello,

This patch series is for enabling asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.

Just like commits 5af84b82701a and 97df8c12995, using async threads
reduces the system suspend and resume time significantly.

With these patches, on my test platform, 80% of the time was saved in
the resume_noirq phase.

Best Regards,

---
V2:
 -- Based on Rafael's minor changes related to coding style, white space etc;
 -- Rafael pointed out that the v1 series breaks the device parent-children
    suspend/resume ordering when async is enabled; the per-device
    completion is now used to keep the parent-children order;

---
[PATCH 1/5] PM / sleep: Two flags for async suspend_noirq and
[PATCH 2/5] PM / sleep: Asynchronous threads for resume_noirq
[PATCH 3/5] PM / sleep: Asynchronous threads for resume_early
[PATCH 4/5] PM / sleep: Asynchronous threads for suspend_noirq
[PATCH 5/5] PM / sleep: Asynchronous threads for suspend_late

 drivers/base/power/main.c | 273 
++
 include/linux/pm.h|   2 ++
 2 files changed, 225 insertions(+), 50 deletions(-)




[PATCH v2 0/5] Enabling the asynchronous threads for other phases

2014-02-16 Thread Chuansheng Liu
Hello,

This patch series are for enabling the asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.

Just like commit 5af84b82701a and 97df8c12995, with async threads it will
reduce the system suspending and resuming time significantly.

With these patches, in my test platform, it saved 80% time in resume_noirq
phase.

Best Regards,

---
V2:
 -- Based on Rafael's minor changes related to coding style, white space etc;
 -- Rafael pointed out the v1 series break the device parent-children
suspending/resuming order when enabling asyncing, here using the
dev completion to sync parent-children order;

---
[PATCH 1/5] PM / sleep: Two flags for async suspend_noirq and
[PATCH 2/5] PM / sleep: Asynchronous threads for resume_noirq
[PATCH 3/5] PM / sleep: Asynchronous threads for resume_early
[PATCH 4/5] PM / sleep: Asynchronous threads for suspend_noirq
[PATCH 5/5] PM / sleep: Asynchronous threads for suspend_late

 drivers/base/power/main.c | 273 
++
 include/linux/pm.h|   2 ++
 2 files changed, 225 insertions(+), 50 deletions(-)


--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v2 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-16 Thread Chuansheng Liu
The patch is a helper adding two new flags for implementing
async threads for suspend_noirq and suspend_late.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h|  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev-power.is_prepared = false;
dev-power.is_suspended = false;
+   dev-power.is_noirq_suspended = false;
+   dev-power.is_late_suspended = false;
init_completion(dev-power.completion);
complete_all(dev-power.completion);
dev-power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
if (dev-power.syscore)
goto Out;
 
+   if (!dev-power.is_noirq_suspended)
+   goto Out;
+
if (dev-pm_domain) {
info = noirq power domain ;
callback = pm_noirq_op(dev-pm_domain-ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev-power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
if (dev-power.syscore)
goto Out;
 
+   if (!dev-power.is_late_suspended)
+   goto Out;
+
if (dev-pm_domain) {
info = early power domain ;
callback = pm_late_early_op(dev-pm_domain-ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, 
pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev-power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev-power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
callback = pm_noirq_op(dev-driver-pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev-power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, 
pm_message_t state)
callback = pm_late_early_op(dev-driver-pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev-power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned intasync_suspend:1;
boolis_prepared:1;  /* Owned by the PM core */
boolis_suspended:1; /* Ditto */
+   boolis_noirq_suspended:1;
+   boolis_late_suspended:1;
boolignore_children:1;
boolearly_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v2 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 2f2d110..72b4c9c 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev-power.syscore)
-   return 0;
+   goto Complete;
 
if (dev-pm_domain) {
info = noirq power domain ;
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, 
pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev-power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(dev-power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition,  async, error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_noirq - Execute noirq suspend callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v2 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 89172aa..2f2d110 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -607,6 +607,8 @@ static int device_resume_early(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_late_suspended)
goto Out;
 
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_early - Execute early resume callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " early", error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
 
-- 
1.9.rc0

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


[PATCH v2 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is the resume_noirq phase for PCI devices: the function
pci_pm_resume_noirq() is called for each device, and each call incurs at
least one d3_delay (10 ms).

With asynchronous threads these delays elapse in parallel, so in total we
wait roughly one d3_delay instead of one per device, which shortens resume
considerably.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..89172aa 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -481,6 +481,8 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_noirq_suspended)
goto Out;
 
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+       && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute noirq resume callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-       && !pm_trace_is_enabled();
-}
-
 /**
  * dpm_resume - Execute resume callbacks for non-sysdev devices.
  * @state: PM transition of the system being carried out.
-- 
1.9.rc0


[PATCH v2 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 64 ++-
 1 file changed, 52 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 72b4c9c..0c5fad0 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,24 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1168,40 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_late - Execute late suspend callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1212,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1234,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 0/5] Enabling the asynchronous threads for other phases

2014-02-16 Thread Chuansheng Liu
Hello,

This patch series enables asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.

Just like commits 5af84b82701a and 97df8c12995, async threads
reduce the system suspend and resume time significantly.

With these patches, on my test platform, 80% of the time spent in
the resume_noirq phase was saved.

Best Regards,

---
V2:
 -- Based on Rafael's minor changes related to coding style, white space etc;
 -- Rafael pointed out that the v1 series broke the device parent-child
suspend/resume ordering when async was enabled; the per-device
completion is now used to preserve that ordering;

V3:
 -- Patch v2 5/5 was missing a dpm_wait_for_children() call; it is added here;

---

[PATCH v3 1/5] PM / sleep: Two flags for async suspend_noirq and
[PATCH v3 2/5] PM / sleep: Asynchronous threads for resume_noirq
[PATCH v3 3/5] PM / sleep: Asynchronous threads for resume_early
[PATCH v3 4/5] PM / sleep: Asynchronous threads for suspend_noirq
[PATCH v3 5/5] PM / sleep: Asynchronous threads for suspend_late

 drivers/base/power/main.c | 275 ++
 include/linux/pm.h        |   2 ++
 2 files changed, 227 insertions(+), 50 deletions(-)




[PATCH v3 4/5] PM / sleep: Asynchronous threads for suspend_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.

This patch is for suspend_noirq phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 68 +++
 1 file changed, 57 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 2f2d110..72b4c9c 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -990,14 +990,24 @@ static pm_message_t resume_event(pm_message_t sleep_state)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_suspend_noirq(struct device *dev, pm_message_t state)
+static int __device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
+
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
 
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "noirq power domain ";
@@ -1021,10 +1031,40 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_noirq_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_noirq(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+
+   put_device(dev);
+}
+
+static int device_suspend_noirq(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_noirq, dev);
+   return 0;
+   }
+   return __device_suspend_noirq(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_noirq - Execute noirq suspend callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1040,19 +1080,20 @@ static int dpm_suspend_noirq(pm_message_t state)
cpuidle_pause();
suspend_device_irqs();
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_late_early_list)) {
struct device *dev = to_device(dpm_late_early_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_noirq(dev, state);
+   error = device_suspend_noirq(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " noirq", error);
-   suspend_stats.failed_suspend_noirq++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1061,16 +1102,21 @@ static int dpm_suspend_noirq(pm_message_t state)
list_move(&dev->power.entry, &dpm_noirq_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (!error)
+   error = async_error;
+
+   if (error) {
+   suspend_stats.failed_suspend_noirq++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_NOIRQ);
dpm_resume_noirq(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "noirq");
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 1/5] PM / sleep: Two flags for async suspend_noirq and suspend_late

2014-02-16 Thread Chuansheng Liu
This patch adds two new flags as groundwork for implementing
async threads for suspend_noirq and suspend_late.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 24 ++--
 include/linux/pm.h        |  2 ++
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..00c53eb 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -91,6 +91,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev->power.is_prepared = false;
dev->power.is_suspended = false;
+   dev->power.is_noirq_suspended = false;
+   dev->power.is_late_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -479,6 +481,9 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_noirq_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -499,6 +504,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -561,6 +567,9 @@ static int device_resume_early(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_late_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "early power domain ";
callback = pm_late_early_op(&dev->pm_domain->ops, state);
@@ -581,6 +590,7 @@ static int device_resume_early(struct device *dev, pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_late_suspended = false;
 
  Out:
TRACE_RESUME(error);
@@ -917,6 +927,7 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
if (dev->power.syscore)
return 0;
@@ -940,7 +951,11 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state)
callback = pm_noirq_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_noirq_suspended = true;
+
+   return error;
 }
 
 /**
@@ -1003,6 +1018,7 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
+   int error;
 
__pm_runtime_disable(dev, false);
 
@@ -1028,7 +1044,11 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
callback = pm_late_early_op(dev->driver->pm, state);
}
 
-   return dpm_run_callback(callback, dev, state, info);
+   error = dpm_run_callback(callback, dev, state, info);
+   if (!error)
+   dev->power.is_late_suspended = true;
+
+   return error;
 }
 
 /**
diff --git a/include/linux/pm.h b/include/linux/pm.h
index 8c6583a..f23a4f1 100644
--- a/include/linux/pm.h
+++ b/include/linux/pm.h
@@ -542,6 +542,8 @@ struct dev_pm_info {
unsigned int    async_suspend:1;
bool    is_prepared:1;  /* Owned by the PM core */
bool    is_suspended:1; /* Ditto */
+   bool    is_noirq_suspended:1;
+   bool    is_late_suspended:1;
bool    ignore_children:1;
bool    early_init:1;   /* Owned by the PM core */
spinlock_t  lock;
-- 
1.9.rc0



[PATCH v3 2/5] PM / sleep: Asynchronous threads for resume_noirq

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_noirq time
significantly.

One typical case is the resume_noirq phase for PCI devices: the function
pci_pm_resume_noirq() is called for each device, and each call incurs at
least one d3_delay (10 ms).

With asynchronous threads these delays elapse in parallel, so in total we
wait roughly one d3_delay instead of one per device, which shortens resume
considerably.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 66 +++
 1 file changed, 50 insertions(+), 16 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 00c53eb..89172aa 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -469,7 +469,7 @@ static void dpm_watchdog_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -481,6 +481,8 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_noirq_suspended)
goto Out;
 
@@ -507,10 +509,29 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
dev->power.is_noirq_suspended = false;
 
  Out:
+   complete_all(&dev->power.completion);
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+       && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute noirq resume callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -520,29 +541,48 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(&dpm_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+* Advanced the async threads upfront,
+* in case the starting of async threads is
+* delayed by non-async resuming devices.
+*/
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_noirq_list)) {
+   dev = to_device(dpm_noirq_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+
+   error = device_resume_noirq(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
 
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -742,12 +782,6 @@ static void async_resume(void *data, async_cookie_t cookie)
put_device(dev);
 }
 
-static bool is_async(struct device *dev)
-{
-   return dev->power.async_suspend && pm_async_enabled
-       && !pm_trace_is_enabled();
-}
-
 /**
  * dpm_resume - Execute resume callbacks for non-sysdev devices.
  * @state: PM transition of the system being carried out.
-- 
1.9.rc0


[PATCH v3 5/5] PM / sleep: Asynchronous threads for suspend_late

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.

This patch is for suspend_late phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 66 ++-
 1 file changed, 54 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 72b4c9c..c031050 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1127,16 +1127,26 @@ static int dpm_suspend_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_suspend_late(struct device *dev, pm_message_t state)
+static int __device_suspend_late(struct device *dev, pm_message_t state, bool 
async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
-   int error;
+   int error = 0;
+
+   dpm_wait_for_children(dev, async);
 
__pm_runtime_disable(dev, false);
 
+   if (async_error)
+   goto Complete;
+
+   if (pm_wakeup_pending()) {
+   async_error = -EBUSY;
+   goto Complete;
+   }
+
if (dev->power.syscore)
-   return 0;
+   goto Complete;
 
if (dev->pm_domain) {
info = "late power domain ";
@@ -1160,10 +1170,40 @@ static int device_suspend_late(struct device *dev, pm_message_t state)
error = dpm_run_callback(callback, dev, state, info);
if (!error)
dev->power.is_late_suspended = true;
+   else
+   async_error = error;
 
+Complete:
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_suspend_late(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_suspend_late(dev, pm_transition, true);
+   if (error) {
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, pm_transition, " async", error);
+   }
+   put_device(dev);
+}
+
+static int device_suspend_late(struct device *dev)
+{
+   reinit_completion(&dev->power.completion);
+
+   if (pm_async_enabled && dev->power.async_suspend) {
+   get_device(dev);
+   async_schedule(async_suspend_late, dev);
+   return 0;
+   }
+
+   return __device_suspend_late(dev, pm_transition, false);
+}
+
 /**
  * dpm_suspend_late - Execute late suspend callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -1174,19 +1214,20 @@ static int dpm_suspend_late(pm_message_t state)
int error = 0;
 
mutex_lock(&dpm_list_mtx);
+   pm_transition = state;
+   async_error = 0;
+
while (!list_empty(&dpm_suspended_list)) {
struct device *dev = to_device(dpm_suspended_list.prev);
 
get_device(dev);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_suspend_late(dev, state);
+   error = device_suspend_late(dev);
 
mutex_lock(&dpm_list_mtx);
if (error) {
pm_dev_err(dev, state, " late", error);
-   suspend_stats.failed_suspend_late++;
-   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
@@ -1195,17 +1236,18 @@ static int dpm_suspend_late(pm_message_t state)
list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
 
-   if (pm_wakeup_pending()) {
-   error = -EBUSY;
+   if (async_error)
break;
-   }
}
mutex_unlock(&dpm_list_mtx);
-   if (error)
+   async_synchronize_full();
+   if (error) {
+   suspend_stats.failed_suspend_late++;
+   dpm_save_failed_step(SUSPEND_SUSPEND_LATE);
dpm_resume_early(resume_event(state));
-   else
+   } else {
dpm_show_time(starttime, state, "late");
-
+   }
return error;
 }
 
-- 
1.9.rc0



[PATCH v3 3/5] PM / sleep: Asynchronous threads for resume_early

2014-02-16 Thread Chuansheng Liu
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.

This patch is for resume_early phase.

Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
 drivers/base/power/main.c | 55 +--
 1 file changed, 44 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 89172aa..2f2d110 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -595,7 +595,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int device_resume_early(struct device *dev, pm_message_t state, bool async)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -607,6 +607,8 @@ static int device_resume_early(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   dpm_wait(dev->parent, async);
+
if (!dev->power.is_late_suspended)
goto Out;
 
@@ -636,38 +638,69 @@ static int device_resume_early(struct device *dev, pm_message_t state)
TRACE_RESUME(error);
 
pm_runtime_enable(dev);
+   complete_all(&dev->power.completion);
return error;
 }
 
+static void async_resume_early(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_early(dev, pm_transition, true);
+   if (error)
+   pm_dev_err(dev, pm_transition, " async", error);
+
+   put_device(dev);
+}
+
 /**
  * dpm_resume_early - Execute early resume callbacks for all devices.
  * @state: PM transition of the system being carried out.
  */
 static void dpm_resume_early(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
 
mutex_lock(&dpm_list_mtx);
-   while (!list_empty(dpm_late_early_list)) {
-   struct device *dev = to_device(dpm_late_early_list.next);
-   int error;
+   pm_transition = state;
+
+   /*
+    * Start the async threads upfront, in case their start
+    * is delayed by devices that resume synchronously.
+    */
+   list_for_each_entry(dev, &dpm_late_early_list, power.entry) {
+   reinit_completion(&dev->power.completion);
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_early, dev);
+   }
+   }
 
+   while (!list_empty(&dpm_late_early_list)) {
+   dev = to_device(dpm_late_early_list.next);
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_suspended_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_early(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_early++;
-   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " early", error);
-   }
+   if (!is_async(dev)) {
+   int error;
 
+   error = device_resume_early(dev, state, false);
+   if (error) {
+   suspend_stats.failed_resume_early++;
+   dpm_save_failed_step(SUSPEND_RESUME_EARLY);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " early", error);
+   }
+   }
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "early");
 }
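
The dpm_wait()/complete_all() pairing added above is what makes the async threads safe: each device carries a completion, and an async child blocks on its parent's completion before running its own resume callback. A rough user-space analogy of that ordering (Python threading.Event standing in for the kernel's struct completion; the two-device tree is hypothetical):

```python
import threading

class Dev:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.completion = threading.Event()

log = []

def device_resume_early(dev):
    if dev.parent:                 # analogue of dpm_wait(dev->parent, async)
        dev.parent.completion.wait()
    log.append(dev.name)           # the device's resume callback runs here
    dev.completion.set()           # analogue of complete_all(&dev->power.completion)

root = Dev("root")
child = Dev("child", parent=root)

# Start the child first to show that the wait really orders them:
# the child blocks until the parent's completion fires.
t = threading.Thread(target=device_resume_early, args=(child,))
t.start()
device_resume_early(root)
t.join()
```

Even though the child thread is scheduled first, `log` always ends up as `["root", "child"]`, which is the parent-before-child guarantee the patch relies on.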
 
-- 
1.9.rc0



[PATCH 1/2] genirq: Fix the possible synchronize_irq() wait-forever

2014-02-10 Thread Chuansheng Liu
There is the below race between the irq handler and the irq thread:

irq handler                      irq thread

irq_wake_thread()                irq_thread()
  set bit RUNTHREAD
  ...                              clear bit RUNTHREAD
                                   thread_fn()
                                   [A] test_and_decrease
                                       threads_active
  [B] increase threads_active

If action A happens before action B, then afterwards threads_active
stays > 0 forever, and any synchronize_irq() call will wait there
forever.

Here, increase threads_active before setting the RUNTHREAD bit,
which resolves the race.
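
The interleavings above can be modelled as a tiny single-threaded state machine (an illustrative sketch, not kernel code: each step function below stands for one atomic action, and the dict fields are simplified stand-ins for the RUNTHREAD bit and the threads_active counter):

```python
def run(steps):
    """Apply a sequence of atomic actions to a fresh flag/counter state."""
    state = {"runthread": False, "active": 0}
    for step in steps:
        step(state)
    return state

# irq handler actions
def set_bit(state):    state["runthread"] = True
def inc_active(state): state["active"] += 1

# irq thread actions (the thread is already running, finishing its loop)
def clear_bit(state):  state["runthread"] = False
def dec_if_done(state):
    # thread_fn() finished and RUNTHREAD is clear: drop the count
    if not state["runthread"] and state["active"] > 0:
        state["active"] -= 1

# Buggy order: the thread's decrement [A] runs before the handler's
# increment [B], so the decrement sees active == 0 and does nothing,
# and the later increment strands the counter at 1.
buggy = run([set_bit, clear_bit, dec_if_done, inc_active])

# Fixed order: the handler increments threads_active *before* setting
# RUNTHREAD, so the thread's decrement always observes the increment.
fixed = run([inc_active, set_bit, clear_bit, dec_if_done])
```

In the buggy ordering the counter ends at 1 (so a synchronize_irq() waiting for it to reach 0 would never return), while the fixed ordering ends at 0.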

Signed-off-by: xiaoming wang 
Signed-off-by: Chuansheng Liu 
---
 kernel/irq/handle.c | 21 -
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 131ca17..5f9fbb7 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -65,7 +65,7 @@ static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
 * Wake up the handler thread for this action. If the
 * RUNTHREAD bit is already set, nothing to do.
 */
-   if (test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags))
+   if (test_bit(IRQTF_RUNTHREAD, &action->thread_flags))
return;
 
/*
@@ -126,6 +126,25 @@ static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
 */
atomic_inc(&desc->threads_active);
 
+   /*
+    * Set the RUNTHREAD bit after increasing threads_active;
+    * this avoids the below race:
+    * irq handler                  irq thread (already running)
+    *
+    *  set RUNTHREAD bit
+    *                              clear the RUNTHREAD bit
+    *  ...                         thread_fn()
+    *
+    *                              due to threads_active == 0,
+    *                              no waking up of wait_for_threads
+    *
+    *  threads_active++
+    *
+    * After that, threads_active would stay > 0 forever, which
+    * would block synchronize_irq().
+    */
+   set_bit(IRQTF_RUNTHREAD, &action->thread_flags);
+
wake_up_process(action->thread);
 }
 
-- 
1.9.rc0



[PATCH 2/2] genirq: Fix one typo chasnge

2014-02-10 Thread Chuansheng Liu
Change the comment "chasnge" to "change".

Signed-off-by: Chuansheng Liu 
---
 kernel/irq/manage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 481a13c..4802295 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -727,7 +727,7 @@ out_unlock:
 
 #ifdef CONFIG_SMP
 /*
- * Check whether we need to chasnge the affinity of the interrupt thread.
+ * Check whether we need to change the affinity of the interrupt thread.
  */
 static void
 irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
-- 
1.9.rc0





[PATCH V2] PM: Enable asynchronous threads for suspend/resume_noirq/late/early phases

2014-01-15 Thread Chuansheng Liu

Current code has implemented asynchronous threads for dpm_suspend()
and dpm_resume(), which saved much time.

As commit 5af84b82701a and 97df8c12995 said, the total time can be
reduced significantly by running suspend and resume callbacks of
device drivers in parallel with each other.

The suspend_late/suspend_noirq/resume_noirq/resume_early phases can
also take much time without asynchronous threads; following the idea
of the above commits, implement them with async threads too.

One example below for my test platform:
Without this patch:
[ 1411.272218] PM: noirq resume of devices complete after 92.223 msecs
with this patch:
[  110.616735] PM: noirq resume of devices complete after 10.544 msecs

Normally about 80% of the time is saved, which helps the user
experience, especially on mobile platforms.
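
The overall pattern the patch applies to each phase is: schedule the async-capable devices on worker threads up front, walk the list resuming the synchronous devices inline, then wait on a barrier for all async work before ending the phase. A rough user-space analogy (Python concurrent.futures; the device names and the `async` flags are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def resume(dev, resumed):
    # stand-in for a device's noirq/early resume callback
    resumed.append(dev["name"])

def dpm_resume_phase(devices):
    resumed = []
    futures = []
    with ThreadPoolExecutor() as pool:
        # advance the async threads upfront (analogue of async_schedule())
        for dev in devices:
            if dev["async"]:
                futures.append(pool.submit(resume, dev, resumed))
        # resume the non-async devices inline, preserving list order
        for dev in devices:
            if not dev["async"]:
                resume(dev, resumed)
        # analogue of async_synchronize_full(): wait for all async work
        for f in futures:
            f.result()
    return resumed

devs = [{"name": "sata", "async": True},
        {"name": "uart", "async": False},
        {"name": "gpio", "async": True}]
```

Every device is resumed exactly once; only the relative order of the async devices is left to the scheduler, which is why the phase must end with the full synchronization barrier.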

Signed-off-by: Liu, Chuansheng 
---
 drivers/base/power/main.c |  189 +++--
 include/linux/pm.h|2 +
 2 files changed, 166 insertions(+), 25 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 6a33dd8..62648b2 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -72,6 +72,8 @@ void device_pm_sleep_init(struct device *dev)
 {
dev->power.is_prepared = false;
dev->power.is_suspended = false;
+   dev->power.is_late_suspended = false;
+   dev->power.is_noirq_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -452,7 +454,7 @@ static void dpm_wd_clear(struct dpm_watchdog *wd)
  * The driver of @dev will not receive interrupts while this function is being
  * executed.
  */
-static int device_resume_noirq(struct device *dev, pm_message_t state)
+static int __device_resume_noirq(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -464,6 +466,9 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
if (dev->power.syscore)
goto Out;
 
+   if (!dev->power.is_noirq_suspended)
+   goto Out;
+
if (dev->pm_domain) {
info = "noirq power domain ";
callback = pm_noirq_op(&dev->pm_domain->ops, state);
@@ -484,12 +489,41 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
}
 
error = dpm_run_callback(callback, dev, state, info);
+   dev->power.is_noirq_suspended = false;
 
  Out:
TRACE_RESUME(error);
return error;
 }
 
+static bool is_async(struct device *dev)
+{
+   return dev->power.async_suspend && pm_async_enabled
+   && !pm_trace_is_enabled();
+}
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = __device_resume_noirq(dev, pm_transition);
+   if (error)
+   pm_dev_err(dev, pm_transition, " noirq", error);
+   put_device(dev);
+}
+
+static int device_resume_noirq(struct device *dev)
+{
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   return 0;
+   }
+
+   return __device_resume_noirq(dev, pm_transition);
+}
+
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -500,6 +534,7 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
 static void dpm_resume_noirq(pm_message_t state)
 {
ktime_t starttime = ktime_get();
+   pm_transition = state;
 
mutex_lock(&dpm_list_mtx);
while (!list_empty(&dpm_noirq_list)) {
@@ -510,18 +545,18 @@ static void dpm_resume_noirq(pm_message_t state)
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
+   error = device_resume_noirq(dev);
if (error) {
suspend_stats.failed_resume_noirq++;
dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
dpm_save_failed_dev(dev_name(dev));
pm_dev_err(dev, state, " noirq", error);
}
-
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
@@ -534,7 +569,7 @@ static void dpm_resume_noirq(pm_message_t state)
  *
  * Runtime PM is disabled for @dev while this function is being executed.
  */
-static int device_resume_early(struct device *dev, pm_message_t state)
+static int __device_resume_early(struct device *dev, pm_message_t state)
 {
pm_callback_t callback = NULL;
char *info = NULL;
@@ -546,6 +581,9 @@ static int 

[PATCH] PM: Enable asynchronous noirq resume threads to save the resuming time

2014-01-13 Thread Chuansheng Liu

Currently, dpm_resume_noirq() is done synchronously, and for PCI
devices the pci_pm_resume_noirq() call chain is:

pci_pm_resume_noirq()
  pci_pm_default_resume_early()
    pci_power_up()
      pci_raw_set_power_state()

which mostly sets the device from D3hot back to D0, and for every such
device there is a 10 ms (pci_pm_d3_delay) wait.

Hence dpm_resume_noirq() normally costs > 100 ms, which is significant
on mobile platforms.

Here we implement it asynchronously, which reduces the time
considerably. In the example below, about 80% of the time is saved.
With synchronous way:
[ 1411.272218] PM: noirq resume of devices complete after 92.223 msecs
With asynchronous way:
[  110.616735] PM: noirq resume of devices complete after 10.544 msecs

Signed-off-by: Liu, Chuansheng 
---
 drivers/base/power/main.c |   42 +-
 1 file changed, 33 insertions(+), 9 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 1b41fca..1b9d774 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -505,6 +505,19 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
return error;
 }
 
+static bool is_async(struct device *dev);
+
+static void async_resume_noirq(void *data, async_cookie_t cookie)
+{
+   struct device *dev = (struct device *)data;
+   int error;
+
+   error = device_resume_noirq(dev, pm_transition);
+   if (error)
+   pm_dev_err(dev, pm_transition, " noirq", error);
+   put_device(dev);
+}
+
 /**
  * dpm_resume_noirq - Execute "noirq resume" callbacks for all devices.
  * @state: PM transition of the system being carried out.
@@ -514,29 +527,40 @@ static int device_resume_noirq(struct device *dev, pm_message_t state)
  */
 static void dpm_resume_noirq(pm_message_t state)
 {
+   struct device *dev;
ktime_t starttime = ktime_get();
+   pm_transition = state;
+
+   list_for_each_entry(dev, &dpm_noirq_list, power.entry) {
+   if (is_async(dev)) {
+   get_device(dev);
+   async_schedule(async_resume_noirq, dev);
+   }
+   }
 
mutex_lock(&dpm_list_mtx);
while (!list_empty(_noirq_list)) {
-   struct device *dev = to_device(dpm_noirq_list.next);
-   int error;
+   dev = to_device(dpm_noirq_list.next);
 
get_device(dev);
list_move_tail(&dev->power.entry, &dpm_late_early_list);
mutex_unlock(&dpm_list_mtx);
 
-   error = device_resume_noirq(dev, state);
-   if (error) {
-   suspend_stats.failed_resume_noirq++;
-   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
-   dpm_save_failed_dev(dev_name(dev));
-   pm_dev_err(dev, state, " noirq", error);
+   if (!is_async(dev)) {
+   int error;
+   error = device_resume_noirq(dev, state);
+   if (error) {
+   suspend_stats.failed_resume_noirq++;
+   dpm_save_failed_step(SUSPEND_RESUME_NOIRQ);
+   dpm_save_failed_dev(dev_name(dev));
+   pm_dev_err(dev, state, " noirq", error);
+   }
}
-
mutex_lock(&dpm_list_mtx);
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
+   async_synchronize_full();
dpm_show_time(starttime, state, "noirq");
resume_device_irqs();
cpuidle_resume();
-- 
1.7.9.5






[PATCH 3/3] dm snapshot: Calling destroy_work_on_stack() to pair with INIT_WORK_ONSTACK()

2014-01-07 Thread Chuansheng Liu

When CONFIG_DEBUG_OBJECTS_WORK is defined, destroy_work_on_stack()
must be called to free the debug object, pairing with
INIT_WORK_ONSTACK().
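
A user-space analogy of what the debug-objects machinery tracks (illustrative Python with hypothetical names, not kernel code): INIT_WORK_ONSTACK() registers the on-stack object in a debug registry, and destroy_work_on_stack() must remove it before the stack frame dies, otherwise a stale entry is left behind.

```python
debug_objects = set()   # analogue of the debug-objects tracking table

def init_work_onstack(work):
    debug_objects.add(id(work))

def destroy_work_on_stack(work):
    debug_objects.discard(id(work))

def chunk_io_without_fix():
    work = object()              # "on-stack" work item
    init_work_onstack(work)
    # ... queue_work() / flush_workqueue() would run here ...
    # missing destroy: the registry keeps a stale entry

def chunk_io_with_fix():
    work = object()
    init_work_onstack(work)
    # ... queue_work() / flush_workqueue() would run here ...
    destroy_work_on_stack(work)  # the call this patch series adds

chunk_io_without_fix()
leaked = len(debug_objects)      # one stale entry left behind

debug_objects.clear()
chunk_io_with_fix()
balanced = len(debug_objects)    # registry is empty again
```

The stale entry is exactly what CONFIG_DEBUG_OBJECTS_WORK would later complain about once the stack memory is reused.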

Signed-off-by: Liu, Chuansheng 
---
 drivers/md/dm-snap-persistent.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
index 2d2b1b7..2f5a9f8 100644
--- a/drivers/md/dm-snap-persistent.c
+++ b/drivers/md/dm-snap-persistent.c
@@ -257,6 +257,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, int rw,
INIT_WORK_ONSTACK(&req.work, do_metadata);
queue_work(ps->metadata_wq, &req.work);
flush_workqueue(ps->metadata_wq);
+   destroy_work_on_stack(&req.work);
 
return req.result;
 }
-- 
1.7.9.5





[PATCH 1/3] workqueue: Calling destroy_work_on_stack() to pair with INIT_WORK_ONSTACK()

2014-01-07 Thread Chuansheng Liu

When CONFIG_DEBUG_OBJECTS_WORK is defined, destroy_work_on_stack()
must be called to free the debug object, pairing with
INIT_WORK_ONSTACK().

Signed-off-by: Liu, Chuansheng 
---
 kernel/workqueue.c |2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b010eac..82ef9f3 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -4789,6 +4789,7 @@ static int workqueue_cpu_down_callback(struct notifier_block *nfb,
 
/* wait for per-cpu unbinding to finish */
flush_work(&unbind_work);
+   destroy_work_on_stack(&unbind_work);
break;
}
return NOTIFY_OK;
@@ -4828,6 +4829,7 @@ long work_on_cpu(int cpu, long (*fn)(void *), void *arg)
INIT_WORK_ONSTACK(&wfc.work, work_for_cpu_fn);
schedule_work_on(cpu, &wfc.work);
flush_work(&wfc.work);
+   destroy_work_on_stack(&wfc.work);
return wfc.ret;
 }
 EXPORT_SYMBOL_GPL(work_on_cpu);
-- 
1.7.9.5





[PATCH 2/3] xfs: Calling destroy_work_on_stack() to pair with INIT_WORK_ONSTACK()

2014-01-07 Thread Chuansheng Liu

When CONFIG_DEBUG_OBJECTS_WORK is defined, destroy_work_on_stack()
must be called to free the debug object, pairing with
INIT_WORK_ONSTACK().

Signed-off-by: Liu, Chuansheng 
---
 fs/xfs/xfs_bmap_util.c |1 +
 1 file changed, 1 insertion(+)

diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index 1394106..82e0dab 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -287,6 +287,7 @@ xfs_bmapi_allocate(
INIT_WORK_ONSTACK(&args->work, xfs_bmapi_allocate_worker);
queue_work(xfs_alloc_wq, &args->work);
wait_for_completion(&done);
+   destroy_work_on_stack(&args->work);
return args->result;
 }
 
-- 
1.7.9.5








[tip:core/locking] mutexes: Give more informative mutex warning in the !lock->owner case

2013-12-18 Thread tip-bot for Chuansheng Liu
Commit-ID:  91f30a17024ff0d8345e11228af33ee938b13426
Gitweb: http://git.kernel.org/tip/91f30a17024ff0d8345e11228af33ee938b13426
Author: Chuansheng Liu 
AuthorDate: Wed, 4 Dec 2013 13:58:13 +0800
Committer:  Ingo Molnar 
CommitDate: Tue, 17 Dec 2013 15:35:10 +0100

mutexes: Give more informative mutex warning in the !lock->owner case

When mutex debugging is enabled and an imbalanced mutex_unlock()
is called, we get the following, slightly confusing warning:

  [  364.208284] DEBUG_LOCKS_WARN_ON(lock->owner != current)

But in that case the warning is due to an imbalanced mutex_unlock() call,
and the lock->owner is NULL - so the message is misleading.

So improve the message by testing for this case specifically:

   DEBUG_LOCKS_WARN_ON(!lock->owner)

Signed-off-by: Liu, Chuansheng 
Signed-off-by: Peter Zijlstra 
Cc: Linus Torvalds 
Cc: Andrew Morton 
Cc: Thomas Gleixner 
Cc: Paul E. McKenney 
Link: http://lkml.kernel.org/r/1386136693.3650.48.camel@cliu38-desktop-build
[ Improved the changelog, changed the patch to use !lock->owner consistently. ]
Signed-off-by: Ingo Molnar 
---
 kernel/locking/mutex-debug.c | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 7e3443f..faf6f5b 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -75,7 +75,12 @@ void debug_mutex_unlock(struct mutex *lock)
 		return;
 
 	DEBUG_LOCKS_WARN_ON(lock->magic != lock);
-	DEBUG_LOCKS_WARN_ON(lock->owner != current);
+
+	if (!lock->owner)
+		DEBUG_LOCKS_WARN_ON(!lock->owner);
+	else
+		DEBUG_LOCKS_WARN_ON(lock->owner != current);
+
 	DEBUG_LOCKS_WARN_ON(!lock->wait_list.prev && !lock->wait_list.next);
 	mutex_clear_owner(lock);
 }


