We encountered one BUG_ON() issue in function __run_hrtimer(),
but the panic info is not enough to find out which hrtimer
user is using the hrtimer wrongly.
(In this BUG_ON() case, the callback is running at the same time
hrtimer_start() is executed in a different context.)
We can print basic timer info
During code review, we found no special requirement to
use spin_lock_irqsave/spin_unlock_irqrestore, because
alarmtimer_get_rtcdev() is called via the posix clock interface.
So we would like to replace the spinlock with a mutex.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/time/alarmtimer.c
Some posix clock interfaces use the variable rtcdev directly;
clean that up here by using alarmtimer_get_rtcdev().
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/time/alarmtimer.c | 13 ++---
1 files changed, 6 insertions(+), 7 deletions(-)
diff --git
When CONFIG_RTC_CLASS is not defined, implementing the POSIX clock
interface with null functions is enough.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/time/alarmtimer.c | 78 ++---
1 files changed, 59 insertions(+), 19 deletions(-)
When one CPU is going offline and fixup_irqs() re-sets the
irq affinity in some cases, we should clear the offlining CPU from
the irq affinity.
The reason is that setting the offlining CPU as part of the affinity is useless.
Moreover, the smp_affinity value will be confusing when the
offlining CPU come
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/i2c/busses/i2c-nomadik.c | 14 --
1 files changed, 0 insertions(+), 14
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/video/omap2/dss/dispc.c |3 ---
1 files changed, 0 insertions(+), 3 deletions(-)
diff
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/video/omap2/dss/apply.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/net/wireless/ti/wlcore/ps.c |3 ---
1 files changed, 0 insertions(+), 3 deletions(-)
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/net/can/janz-ican3.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/video/mx3fb.c |7 +++
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git
The return value of wait_for_completion_timeout() is always
>= 0 with unsigned int type.
So the condition "ret < 0" or "ret >= 0" is pointless.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/infiniband/ulp/srpt/ib_srpt.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
There is a race condition as below when calling request_firmware():
CPU1                            CPU2
write 0 > loading
mutex_lock(fw_lock);
...
set_bit FW_STATUS_DONE          class_timeout is coming
                                set_bit FW_STATUS_ABORT
complete_all
There is a race as below when calling request_firmware():
CPU1                            CPU2
write 0 > loading
mutex_lock(fw_lock)
...
set_bit FW_STATUS_DONE          class_timeout is coming
                                set_bit FW_STATUS_ABORT
complete_all completion
...
In the comments of function tick_sched_timer(), the sentence
"timer->base->cpu_base->lock held" is not right.
In function __run_hrtimer(), before calling timer->function(),
the cpu_base->lock has been unlocked.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/time/tick-sched.c |2 +-
In our system, there is one edge interrupt, and we want it to be
a threaded irq with IRQS_ONESHOT, but we found that in handle_edge_irq(),
even with IRQS_ONESHOT, the irq is still unmasked without regard for
the IRQS_ONESHOT flag.
This means IRQS_ONESHOT can not work well for edge interrupts, but also
after the irq
Not all irq chips are IO-APIC chips.
In our system, there are many demuxed GPIO interrupts besides the
io-apic chip interrupts, and these GPIO interrupts belong
to other irq chips whose chip data is not of type struct irq_cfg
either.
But in function __setup_vector_irq(), it listed all
There is a race when two request_firmware() calls are made with
the same firmware name.
The race scenario is as below:
CPU1 CPU2
request_firmware() --
_request_firmware_load() return err another
request_firmware() is coming --
When one shared irq is requested as below:
req1: request_irq(irqnb, cb1, IRQF_SHARED, ...);
then:
req2: request_irq(irqnb, cb2, IRQF_SHARED|IRQF_NO_THREAD, ...);
Both req1 and req2 will be successful.
But if we pass threadirqs on the command line and execute req1 and req2,
req2 will fail.
There is one typical case that will cause this issue:
one io-apic interrupt with ONESHOT and threaded; when the interrupt comes:
handle_fasteoi_irq()
--> mask_irq(desc)
...
desc->irq_data.chip->irq_eoi(desc->irq_data) [ack_apic_level]
--> ioapic_irqd_mask()
...
When destroying the irq, before freeing cfg, we need to check
cfg->irq_2_pin and free it when it is not NULL.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
arch/x86/kernel/apic/io_apic.c | 10 ++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git
When debugging our system issues related to __setup_vector_irq(),
we found genuinely wrong code:
for_each_active_irq(irq) {
cfg = irq_get_chip_data(irq);
if (!cfg)
continue;
This code presumes all allocated irqs are
When the system tries to suspend:
T1: suspend_thread               T2: interrupt thread handler
enter_state()                    arizona_irq_thread()
suspend_devices_and_enter()        regmap_read()
__device_suspend()
As Daniel suggested, I did some cleanup before marking the state_tables
array as __initdata.
Thanks for your help reviewing them.
[PATCH 1/3] intel_idle: change the continue to break in intel_idle_cpu_init()
[PATCH 2/3] intel_idle: remove the redundant calculation of dev->state_count
[PATCH
According to commit e022e7eb9, the .enter == NULL entry is the last one in
state_tables[].
So, just like intel_idle_cpuidle_driver_init(), in the case of .enter == NULL,
break out of the for(;;) loop directly.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/idle/intel_idle.c |2 +-
1
The functions intel_idle_cpu_init() and intel_idle_cpuidle_driver_init()
have the same for(;;) loop.
Here in intel_idle_cpu_init(), dev->state_count can be assigned from
drv->state_count directly.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/idle/intel_idle.c |
Currently, in intel_idle.c, there are 5 state_tables arrays; each
array's size is sizeof(struct cpuidle_state) * CPUIDLE_STATE_MAX.
Since intel_idle_cpuidle_driver_init() has copied the data into
intel_idle_driver->state[], we do not need to keep state_tables[]
there any more after system
The functions intel_idle_cpu_init() and intel_idle_cpuidle_driver_init()
have the same for(;;) loop to count ->state_count.
Although intel_idle_cpu_init() can be called at runtime in the CPU
hotplug case, max_cstate can not be changed at runtime.
So the dev->state_count should be ==
We met some issues when operating one ioapic chip irq which
is set with the _NO_SUSPEND flag and calling irq_set_irq_wake().
We have written the patches below; thanks for your time reviewing them.
[PATCH 1/3] x86, io_apic: Adding the flag IRQCHIP_SKIP_SET_WAKE
[PATCH 2/3] genirq: Do not consider the irqs with
Currently, for an ioapic chip irq, if we call irq_set_irq_wake()
we will get an ENXIO error, but some drivers need the
wake-up interrupt pending mechanism.
Here we add the flag IRQCHIP_SKIP_SET_WAKE instead of an empty callback.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
According to commit 9c6079aa1bf (genirq: Do not consider disabled
wakeup irqs), we should not break the suspend when one irq is pending
but has been disabled before suspending.
But there is another case missed: an irq with the flag IRQF_NO_SUSPEND,
which has been disabled before suspending, and
When one irq is set up with the flag IRQF_NO_SUSPEND, it is pointless
to call irq_set_irq_wake(),
because check_wakeup_irqs() only checks irqs which have a pending state
and are in the IRQS_SUSPENDED state when doing syscore_suspend().
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
According to commit 9c6079aa1bf (genirq: Do not consider disabled
wakeup irqs), we should not break the suspend when one interrupt has
been disabled before suspending and is pending there.
But there is another case missed:
if an interrupt which is marked IRQF_NO_SUSPEND has been disabled
before
Currently, in intel_idle.c, there are 5 state_tables arrays; each
array's size is sizeof(struct cpuidle_state) * CPUIDLE_STATE_MAX.
But after intel_idle_probe(), only one array is useful.
Here we can define just one static state_table, initialize it
in intel_idle_probe(), and set other
Currently, getting the sample period always goes through a complex
calculation: get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5).
But, like watchdog_thresh, it does not change often.
So we can store the sample period in a variable and mark it
__read_mostly.
Signed-off-by: liu
Currently, in the functions smp_call_function_many()/single(), a WARN
is given just in the case of irqs_disabled(), but that is not enough.
In many other cases, such as softirq handling/interrupt handling, the
two APIs still can not be called, just as the smp_call_function_many()
comments say:
* You must
In ack_apic_level(), when there is a pending affinity setting for the
current irq, ioapic_irqd_mask()/ioapic_irqd_unmask() will try to mask
the irq and do the irq affinity setting.
But at this time, it is possible that the irq has already been masked;
in this case, we should not unmask it directly in
Currently the functions smp_call_function_many()/single() will
give a WARN()ing only in the case of irqs_disabled(), but that
check is not enough to guarantee execution of the SMP
cross-calls.
In many other cases such as softirq handling/interrupt handling,
the two APIs still can not be called,
Because the IRQF_DISABLED flag is now a NOOP and has been
deprecated, and in hardirq context the interrupt is disabled anyway,
in the usb/host code:
remove the usage of the flag IRQF_DISABLED;
remove the local_irq save/restore calls in the irq
handler usb_hcd_irq().
Signed-off-by: liu
Like the 8250 driver, when pstore is registered as a console,
to avoid recursive spinlocks when a panic happens, change
spin_lock_irqsave to spin_trylock_irqsave when oops_in_progress
is true.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
fs/pstore/platform.c |7 ++-
1 files
handle_edge_irq() currently does not care about the flag IRQS_ONESHOT,
but many edge interrupt handlers with irq threads do need it,
so implement it here.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/irq/chip.c |8 +++-
1 files changed, 7 insertions(+), 1
Sorry, updating the patch.
On Tue, 2012-09-18 at 20:54 +0800, Chuansheng Liu wrote:
handle_edge_irq() currently does not care about the flag IRQS_ONESHOT,
but many edge interrupt handlers with irq threads do need it,
so implement it here.
Signed-off-by: liu chuansheng chuansheng
When one CPU is going down and smpboot_register_percpu_thread() is called,
there is the race below:
T1 (CPUA):                       T2 (CPUB):
_cpu_down()                      smpboot_register_percpu_thread()
smpboot_park_threads()           ...
In get_sample_period(), unsigned long is not enough:
watchdog_thresh * 2 * (NSEC_PER_SEC / 5)
Case 1: watchdog_thresh is 10 by default;
the sample value will be: 0xEE6B2800
Case 2: set watchdog_thresh to 20;
the sample value will be: 0x1DCD65000
From case 2, we need to use u64 to express the
By default, the watchdog threshold is 10, which means every 4s
every CPU will receive one hrtimer interrupt. For a low power
device, it will cause a 4-5mV power impact when the device is in
deep sleep.
So here we want to optimize it as below:
4s + 4s + 4s + 4s + 4s
==>
12s + 2s + 2s + 2s + 2s
3/5 1/10 1/10
By default, the watchdog threshold is 10, which means every 4s
every CPU will receive one hrtimer interrupt. For a low power
device, it will cause a 4-5mV power impact when the device is in
deep sleep.
So here we want to optimize it as below:
4s + 4s + 4s + 4s + 4s
==>
1s + 9s + 9s ...
Or
1s + 1s .. + 9s + 9s
Commit f1aac484f7 (Take a pm_runtime reference on DAPM devices that are enabled)
introduced runtime_get/put calls when devices are in off/non-off bias.
It is based on:
1/ moving a device from off to non-off bias goes through dapm_pre_sequence_async;
2/ moving a device from non-off to off bias goes through
runtime_get_sync() is called during sdio_bus_probe(), and the device
is then kept in the active runtime state, so it is not necessary to call
runtime_get_sync/put_noidle() again in sdio_bus_remove().
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/mmc/core/sdio_bus.c |8
We just found some cases where codec drivers set the bias to _STANDBY and
set idle_bias_off to 1 during probing.
It will cause an unpaired runtime_get_sync/put() issue. Also, as Mark suggested,
there is no reason to start from _STANDBY bias with idle_bias_off == 1.
So here we give one warning when
This patch fixes the below build error:
drivers/built-in.o: In function `twl_probe':
drivers/mfd/twl-core.c:1256: undefined reference to `devm_regmap_init_i2c'
make: *** [vmlinux] Error 1
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/mfd/Kconfig |1 +
1 files changed, 1
We met one panic issue with the below stack:
<1>[11340.226404] BUG: unable to handle kernel NULL pointer dereference at
0008
<4>[11340.226619] EIP is at __sco_sock_close+0xe8/0x1a0
<4>[11340.226629] EAX: f063a740 EBX: ECX: f58f4544 EDX:
<4>[11340.226640] ESI: dec83e00 EDI: 5f9a081f EBP:
After device_initialize() is called, even if device_add() returns an
error, we still need to use put_device() to release the reference
so that rtd_release() is called, which will do the free action.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
sound/soc/soc-core.c |2 ++
1 files changed, 2
When probing aux_dev, initialization goes as below:
device_initialize()
device_add()
So when removing aux_dev, we need to do:
device_del()
put_device()
Otherwise, rtd_release() will not be called.
So here we use device_unregister() to replace device_del(),
like the action in
Subject: [PATCH] mmc, sdio: Fix the panic due to devname being NULL when calling
pm_runtime_set_active()
We met one panic as below:
<1>[ 15.067350] BUG: unable to handle kernel NULL pointer dereference at
(null)
<1>[ 15.074455] IP: [<c1496a42>] strlen+0x12/0x20
<4>[ 15.078803] *pde =
<0>[
In the call trace below:
mmc_rescan
-> mmc_rescan_try_freq()
-> mmc_attach_sdio()
-> mmc_sdio_init_card()
...
pm_runtime_set_active()
...
mmc_add_card()
The dev name is not set until mmc_add_card(), but before that, it is
possible the dev name is needed, for
In the functions rpm_suspend/resume(), when entering the for(;;) loop,
the pre-condition check has already been done, and the runtime_status
variable is always protected by power.lock, so it is not necessary to
check it again before unlocking power.lock inside the for(;;) loop.
This patch cleans that up.
In the case where, in rpm_suspend(), deferred_resume is true,
rpm_resume() will be called, and if rpm_resume() returns an error,
the device is still in the SUSPENDED state at that time.
In this case it is still a successful suspend, so we should
return non-error instead of EAGAIN.
An example
We met the case where request_threaded_irq() with the same irq
is called twice in a row, and got the below mismatched info:
IRQ handler type mismatch for IRQ 323
Here give a correct warning like below:
Trying to setup already-setup non-shared IRQ 323
Signed-off-by: Chuansheng Liu
When one thread is calling sys_ioctl() and another thread is calling
sys_close(), the current code has protected most cases.
But the below case will cause an issue:
T1                    T2                    T3
sys_close(oldfile)    sys_open(newfile)
Commit 88d2613 moved the pm_runtime_put_sync() call from pci_pm_complete()
to the PM core code device_complete().
Here pci_pm_complete() is doing the same work that is done in
device_complete(), so we can remove it directly.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
Currently acpi_os_sleep() uses schedule_timeout_interruptible(),
which can be interrupted by a signal, causing the real sleep time to be shorter.
According to the ACPI spec:
"The Sleep term is used to implement long-term timing requirements.
Execution is delayed for at least the
In function sdhci_request(), it is possible for the tuning execution
to go as below:
sdhci_request() {
spin_lock_irqsave(host->lock, flags);
host->mrq = mrq;
...
spin_unlock_irqrestore(host->lock, flags);
<=== Here it is possible a pending finish_tasklet gets to run
In release_nodes(), it calls dr->node.release() and kfrees each
dr one by one.
But sometimes the previous dr may still be used by the next .release(),
such as:
[50314.855534] [<c12b172f>] ? synchronize_irq+0x3f/0xb0
[50314.861193] [<c12b18e9>] __free_irq+0x149/0x200
[50314.866367] [<c12b19e3>]
The module parameters ahci_em_messages and devslp_idle_timeout can
be made static and __read_mostly.
Signed-off-by: Liu, Chuansheng <chuansheng@intel.com>
---
drivers/ata/libahci.c |5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/ata/libahci.c
The module parameters ahci_em_messages and devslp_idle_timeout can
be made static and __read_mostly.
Signed-off-by: Liu, Chuansheng <chuansheng@intel.com>
---
drivers/ata/libahci.c |7 ---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/ata/libahci.c
We met the case where on CPU2 there are just two tasks to run,
one rt task A and the idle task, and at this time
the rt_rq is throttled.
CPU1                          CPU2
                              staying in the idle task;
waking up rt task A on
target CPU2;
When mutex debugging is enabled, in case an imbalanced mutex_unlock()
is called, we still get a warning like below:
[ 364.208284] DEBUG_LOCKS_WARN_ON(lock->owner != current)
But in that case, it is due to the imbalanced mutex_unlock call, and
lock->owner is NULL.
Here we can enhance the case
...@intel.com
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/irq/manage.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 481a13c..d3bf660 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -802,8
rightly also.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/usb/gadget/f_fs.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/drivers/usb/gadget/f_fs.c b/drivers/usb/gadget/f_fs.c
index 2b43343..31ee7af 100644
--- a/drivers/usb/gadget/f_fs.c
+++ b
rightly also.
Acked-by: Michal Nazarewicz <min...@mina86.com>
Reviewed-by: David Cohen <david.a.co...@linux.intel.com>
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/usb/gadget/f_fs.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/usb/gadget
Currently, dpm_resume_noirq() is done synchronously, and for PCI devices
pci_pm_resume_noirq():
pci_pm_resume_noirq()
pci_pm_default_resume_early()
pci_power_up()
pci_raw_set_power_state()
which mostly sets the device from D3hot to D0; for every device, there will
be one
Hello,
This patch series enables asynchronous threads for the phases
resume_noirq, resume_early, suspend_noirq and suspend_late.
Just like commits 5af84b82701a and 97df8c12995, async threads
reduce the system suspend and resume time significantly.
With these
The patch is a helper adding two new flags for implementing
async threads for suspend_noirq and suspend_late.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/base/power/main.c | 24 ++--
include/linux/pm.h| 2 ++
2 files changed, 24 insertions
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_noirq
time significantly.
This patch is for suspend_noirq phase.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/base/power/main.c | 68
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall resume_early
time significantly.
This patch is for resume_early phase.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/base/power/main.c | 55
) at least.
With asynchronous threads, we just need to wait the d3_delay
time once, in parallel, for each call, which saves much time and
resumes quickly.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/base/power/main.c | 66 +++
1 file
In analogy with commits 5af84b82701a and 97df8c12995, using
asynchronous threads can improve the overall suspend_late
time significantly.
This patch is for suspend_late phase.
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
drivers/base/power/main.c | 64
Change the comment typo "chasnge" to "change".
Signed-off-by: Chuansheng Liu chuansheng@intel.com
---
kernel/irq/manage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 481a13c..4802295 100644
--- a/kernel/irq/manage.c
+++ b/kernel
Here we put the increase of thread->active before setting the
RUNTHREAD bit, which can resolve such a race.
Signed-off-by: Xiaoming Wang <xiaoming.w...@intel.com>
Signed-off-by: Chuansheng Liu <chuansheng@intel.com>
---
kernel/irq/handle.c | 21 -
1 file changed, 20 insertions(+), 1
When handling scsi_request_fn(), when the sd offline happens, sometimes
we hit a print storm as below:
<3>[ 95.365837] sd 0:0:0:0: rejecting I/O to offline device
<3>[ 95.368633] sd 0:0:0:0: rejecting I/O to offline device
<3>[ 95.369881] sd 0:0:0:0: rejecting I/O to offline device
<3>[
On our platform, we hit the stolen region initialization failure case,
with a log such as below:
[drm:i915_stolen_to_physical] *ERROR* conflict detected with stolen region:
[0x7b00]
This causes dev_priv->mm.stolen_base to be NULL; in this case, we should
avoid accessing it any more.
Here is