Re: [PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop

2024-02-25 Thread Jason Wang
Michael S. Tsirkin wrote: On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote: …

Re: [PATCH net-next v4 2/2] virtio-net: add cond_resched() to the command waiting loop

2024-02-22 Thread Michael S. Tsirkin
On 7/20/23 23:02, Michael S. Tsirkin wrote: On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson …

[PATCH v3 09/24] ARM: at91: pm: add support for waiting MCK1..4

2021-04-15 Thread Claudiu Beznea
SAMA7G5 has 5 master clocks 0..4. MCK0 is controlled differently from MCK 1..4. MCK 1..4 should also be saved/restored in the last phase of suspend/resume. Thus, adapt wait_mckrdy to also support MCK1..4. Signed-off-by: Claudiu Beznea --- arch/arm/mach-at91/pm_suspend.S | 48

[PATCH 5.4 077/111] scsi: ufs: Avoid busy-waiting by eliminating tag conflicts

2021-04-12 Thread Greg Kroah-Hartman
From: Bart Van Assche [ Upstream commit 7252a3603015f1fd04363956f4b72a537c9f9c42 ] Instead of tracking which tags are in use in the ufs_hba.lrb_in_use bitmask, rely on the block layer tag allocation mechanism. This patch removes the following busy-waiting loop if ufshcd_issue_devman_upiu_cmd

[PATCH v2 09/24] ARM: at91: pm: add support for waiting MCK1..4

2021-04-09 Thread Claudiu Beznea
SAMA7G5 has 5 master clocks 0..4. MCK0 is controlled differently from MCK 1..4. MCK 1..4 should also be saved/restored in the last phase of suspend/resume. Thus, adapt wait_mckrdy to also support MCK1..4. Signed-off-by: Claudiu Beznea --- arch/arm/mach-at91/pm_suspend.S | 48

[PATCH v14 5/6] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA

2021-04-01 Thread Alex Kogan
n->numa_node = cn->real_numa_node; + /* * Try and put the time otherwise spent spin waiting on * _Q_LOCKED_PENDING_MASK to use by sorting our lists. -- 2.24.3 (Apple Git-128)

[PATCH 09/24] ARM: at91: pm: add support for waiting MCK1..4

2021-03-31 Thread Claudiu Beznea
SAMA7G5 has 5 master clocks 0..4. MCK0 is controlled differently from MCK 1..4. MCK 1..4 should also be saved/restored in the last phase of suspend/resume. Thus, adapt wait_mckrdy to also support MCK1..4. Signed-off-by: Claudiu Beznea --- arch/arm/mach-at91/pm_suspend.S | 48

[RFC PATCH v2 06/11] bfq: expire other class if CLASS_RT is waiting

2021-03-12 Thread brookxu
From: Chunguang Xu Expire a bfqq that does not belong to CLASS_RT when CLASS_RT is waiting for service, so we can further guarantee the latency for CLASS_RT. Signed-off-by: Chunguang Xu --- block/bfq-iosched.c | 15 ++- block/bfq-iosched.h | 8 block/bfq-wf2q.c| 12 3

[RFC PATCH v2 08/11] bfq: disallow idle if CLASS_RT waiting for service

2021-03-12 Thread brookxu
From: Chunguang Xu If CLASS_RT is waiting for service, queues belonging to other classes disallow idling, so that a schedule can be invoked in time. Signed-off-by: Chunguang Xu --- block/bfq-iosched.c | 5 + 1 file changed, 5 insertions(+) diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c

[RFC PATCH 4/8] bfq: expire bfqq if a higher priority class is waiting

2021-03-08 Thread brookxu
From: Chunguang Xu Expire bfqq if a higher priority class is waiting to be served, so we can further guarantee the delay of the higher priority class. Signed-off-by: Chunguang Xu --- block/bfq-iosched.c | 14 ++ 1 file changed, 14 insertions(+) diff --git

[PATCH v3 0/3] drm/msm: fix for "Timeout waiting for GMU OOB set GPU_SET: 0x0"

2021-01-28 Thread Eric Anholt
Updated commit messages over v2, no code changes. Eric Anholt (3): drm/msm: Fix race of GPU init vs timestamp power management. drm/msm: Fix races managing the OOB state for timestamp vs timestamps. drm/msm: Clean up GMU OOB set/clear handling. drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 105

Re: [PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-27 Thread Doug Anderson
Hi, On Mon, Jan 25, 2021 at 12:28 PM Stephen Boyd wrote: > > > +/* > > + * Some panels simply don't always come up and need to be power cycled to > > + * work properly. We'll allow for a handful of retries. > > + */ > > +#define MAX_PANEL_PREPARE_TRIES5 > > Is this define used

Re: [PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-25 Thread Stephen Boyd
Quoting Douglas Anderson (2021-01-15 14:44:18) > On an Innolux N116BCA panel that I have in front of me, sometimes HPD > simply doesn't assert no matter how long you wait for it. As per the > very wise advice of The IT Crowd ("Have you tried turning it off and > on again?") it appears that power

Let lockdep complain when locks are taken while waiting for userspace.

2021-01-18 Thread Christian König
Hi guys, because of the Vulkan graphics API we have a specialized synchronization object to handle both inter-process as well as process-to-hardware synchronization. The problem is now that when drivers call this interface with some lock held it is trivial to create a deadlock when those locks

[PATCH 5.10 078/152] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-18 Thread Greg Kroah-Hartman
From: Xu Yilun [ Upstream commit 6170d077bf92c5b3dfbe1021688d3c0404f7c9e9 ] The xfer waiting time is the result of xfer->len / xfer->speed_hz. This patch makes the assumption of 100khz xfer speed if the xfer->speed_hz is not assigned and stays 0. This avoids the divide by 0 issue an

[PATCH v2 3/5] drm/panel-simple: Retry if we timeout waiting for HPD

2021-01-15 Thread Douglas Anderson
re attributed to the fact that it's pre-production and/or can be fixed, retries clearly can help in some cases and really don't hurt. Signed-off-by: Douglas Anderson --- Changes in v2: - ("drm/panel-simple: Retry if we timeout waiting for HPD") new for v2. drivers/gpu/drm/pa

[PATCH AUTOSEL 5.10 29/51] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-12 Thread Sasha Levin
From: Xu Yilun [ Upstream commit 6170d077bf92c5b3dfbe1021688d3c0404f7c9e9 ] The xfer waiting time is the result of xfer->len / xfer->speed_hz. This patch makes the assumption of 100khz xfer speed if the xfer->speed_hz is not assigned and stays 0. This avoids the divide by 0 issue an

Re: [PATCH v3] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-04 Thread Mark Brown
On Mon, 4 Jan 2021 09:29:09 +0800, Xu Yilun wrote: > The xfer waiting time is the result of xfer->len / xfer->speed_hz. This > patch makes the assumption of 100khz xfer speed if the xfer->speed_hz is > not assigned and stays 0. This avoids the divide by 0 issue and ensures >

[PATCH v3] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-03 Thread Xu Yilun
The xfer waiting time is the result of xfer->len / xfer->speed_hz. This patch makes the assumption of 100khz xfer speed if the xfer->speed_hz is not assigned and stays 0. This avoids the divide by 0 issue and ensures a reasonable tolerant waiting time. Signed-off-by: Xu Yilun --
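The guard described in this patch (fall back to a nominal rate when xfer->speed_hz stays 0) can be sketched as a plain userspace model; the function name and the millisecond math below are illustrative, not the kernel's actual helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of the fix: if speed_hz was clamped to 0 (e.g. the
 * controller advertised no max speed), assume 100 kHz so the transfer
 * timeout computation never divides by zero. Illustrative names only. */
static uint64_t xfer_wait_ms(uint32_t len_bytes, uint32_t speed_hz)
{
	if (!speed_hz)
		speed_hz = 100000;	/* 100 kHz fallback, per the patch */

	/* 8 bits per byte, scaled to milliseconds */
	return (8ULL * 1000ULL * len_bytes) / speed_hz;
}
```

With speed_hz == 0 the fallback yields a finite, conservative wait instead of a crash.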

Re: [PATCH v2] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-03 Thread Xu Yilun
On Sat, Jan 02, 2021 at 11:11:14AM -0300, Fabio Estevam wrote: > On Sat, Jan 2, 2021 at 12:07 AM Xu Yilun wrote: > > > > The xfer waiting time is the result of xfer->len / xfer->speed_hz. This > > patch makes the assumption of 1khz xfer speed if the xfer->speed_h

Re: [PATCH v2] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-02 Thread Fabio Estevam
On Sat, Jan 2, 2021 at 12:07 AM Xu Yilun wrote: > > The xfer waiting time is the result of xfer->len / xfer->speed_hz. This > patch makes the assumption of 1khz xfer speed if the xfer->speed_hz is You missed updating the commit log to 100kHz.

[PATCH v2] spi: fix the divide by 0 error when calculating xfer waiting time

2021-01-01 Thread Xu Yilun
The xfer waiting time is the result of xfer->len / xfer->speed_hz. This patch makes the assumption of 1khz xfer speed if the xfer->speed_hz is not assigned and stays 0. This avoids the divide by 0 issue and ensures a reasonable tolerant waiting time. Signed-off-by: Xu Yilun --

Re: [PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-31 Thread Mark Brown
On Thu, Dec 31, 2020 at 11:23:37AM +0800, Xu Yilun wrote: > On Wed, Dec 30, 2020 at 01:46:44PM +, Mark Brown wrote: > > > BTW, Could we keep the spi->max_speed_hz if no controller->max_speed_hz? > > > Always clamp the spi->max_speed_hz to 0 makes no sense. > > Right, that's the fix. > Seems

Re: [PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-30 Thread Xu Yilun
On Wed, Dec 30, 2020 at 01:46:44PM +, Mark Brown wrote: > On Wed, Dec 30, 2020 at 10:24:20AM +0800, Xu Yilun wrote: > > On Tue, Dec 29, 2020 at 01:13:08PM +, Mark Brown wrote: > > > > Does this still apply with current code? There have been some fixes in > > > this area which I think

Re: [PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-30 Thread Mark Brown
On Wed, Dec 30, 2020 at 10:24:20AM +0800, Xu Yilun wrote: > On Tue, Dec 29, 2020 at 01:13:08PM +, Mark Brown wrote: > > Does this still apply with current code? There have been some fixes in > > this area which I think should ensure that we don't turn the speed down > > to 0 if the

Re: [PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-29 Thread Xu Yilun
On Tue, Dec 29, 2020 at 01:13:08PM +, Mark Brown wrote: > On Tue, Dec 29, 2020 at 01:27:42PM +0800, Xu Yilun wrote: > > The xfer waiting time is the result of xfer->len / xfer->speed_hz, but > > when the following patch is merged, > > > > commit 9326e4f1e5d

Re: [PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-29 Thread Mark Brown
On Tue, Dec 29, 2020 at 01:27:42PM +0800, Xu Yilun wrote: > The xfer waiting time is the result of xfer->len / xfer->speed_hz, but > when the following patch is merged, > > commit 9326e4f1e5dd ("spi: Limit the spi device max speed to controller's max > speed") >

[PATCH 2/2] spi: fix the divide by 0 error when calculating xfer waiting time

2020-12-28 Thread Xu Yilun
The xfer waiting time is the result of xfer->len / xfer->speed_hz, but when the following patch is merged, commit 9326e4f1e5dd ("spi: Limit the spi device max speed to controller's max speed") the xfer->speed_hz may always be clamped to 0 if the controller doesn't provi

[PATCH 5.10 441/717] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-28 Thread Greg Kroah-Hartman
From: Guido Günther [ Upstream commit ee46d16d2e40bebc2aa790fd7b6a056466ff895c ] It can take multiple iterations until all components for an attached DSI bridge are up leading to several: [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 [3.816952] mxsfb

[PATCH v13 5/6] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA

2020-12-22 Thread Alex Kogan
Try and put the time otherwise spent spin waiting on * _Q_LOCKED_PENDING_MASK to use by sorting our lists. -- 2.24.3 (Apple Git-128)

Re: [PATCH v1 1/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Daniel Vetter
On Tue, Dec 15, 2020 at 09:23:38AM +0100, Guido Günther wrote: > It can take multiple iterations until all components for an attached DSI > bridge are up leading to several: > > [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 > [3.816952] mxsfb

[PATCH v1 1/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Guido Günther
It can take multiple iterations until all components for an attached DSI bridge are up leading to several: [3.796425] mxsfb 3032.lcd-controller: Cannot connect bridge: -517 [3.816952] mxsfb 3032.lcd-controller: [drm:mxsfb_probe [mxsfb]] *ERROR* failed to attach bridge: -517
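The silencing policy this patch applies can be modeled in userspace: -EPROBE_DEFER (-517) is an expected "try again later" condition while the bridge comes up, so it is not worth an error message. The function below is an illustrative sketch, not the driver's actual code:

```c
#include <assert.h>
#include <stdio.h>

#define EPROBE_DEFER 517	/* errno value behind the -517 in the log */

/* Returns 1 if a message would be printed, 0 if the error is the
 * expected probe-deferral case that should stay silent. */
static int report_bridge_error(int err)
{
	if (err == -EPROBE_DEFER)
		return 0;	/* silent: probing will simply be retried */

	fprintf(stderr, "Cannot connect bridge: %d\n", err);
	return 1;
}
```

Only genuine failures (e.g. -EINVAL) reach the log; the repeated -517 iterations stay quiet.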

[PATCH v1 0/1] drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge

2020-12-15 Thread Guido Günther
the only DRM_DEV_ERROR() usage, the rest of the driver uses dev_err(). Guido Günther (1): drm: mxsfb: Silence -EPROBE_DEFER while waiting for bridge drivers/gpu/drm/mxsfb/mxsfb_drv.c | 10 -- 1 file changed, 4 insertions(+), 6 deletions(-) -- 2.29.2

Re:  PANICKED: Waiting for review: Test report for kernel 5.9.11 (stable-queue)

2020-11-30 Thread Xiumei Mu
- Original Message - > From: "CKI Project" > To: skt-results-mas...@redhat.com > Cc: "Yi Zhang" , "Xiong Zhou" , > "Rachel Sibley" , "Xiumei > Mu" , "Jianwen Ji" , "Hangbin Liu" > , "

[PATCH v2 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED

2020-11-20 Thread Waiman Long
lock, all the current waiting readers will be allowed to join. Other readers that come after that will not be allowed to prevent writer starvation. In the case of RWSEM_WAKE_READ_OWNED, not all currently waiting readers can be woken up if the first waiter happens to be a writer. Complete the phase-

[PATCH v2 13/17] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-20 Thread Saravana Kannan
To check if a device is still waiting for its supplier devices to be added, we used to check if the device is in a global waiting_for_suppliers list. Since the global list will be deleted in subsequent patches, this patch stops using this check. Instead, this patch uses a more device specific

Re: [PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-20 Thread Saravana Kannan
On Mon, Nov 16, 2020 at 8:34 AM Rafael J. Wysocki wrote: > > On Thu, Nov 5, 2020 at 12:24 AM Saravana Kannan wrote: > > > > To check if a device is still waiting for its supplier devices to be > > added, we used to check if the device is in a global > > wai

Re: [PATCH 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED

2020-11-19 Thread Waiman Long
On 11/17/20 11:53 PM, Davidlohr Bueso wrote: On Tue, 17 Nov 2020, Waiman Long wrote: The rwsem wakeup logic has been modified by commit d3681e269fff ("locking/rwsem: Wake up almost all readers in wait queue") to wake up all readers in the wait queue if the first waiter is a reader. In the case

Re: [PATCH 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED

2020-11-17 Thread Davidlohr Bueso
On Tue, 17 Nov 2020, Waiman Long wrote: The rwsem wakeup logic has been modified by commit d3681e269fff ("locking/rwsem: Wake up almost all readers in wait queue") to wake up all readers in the wait queue if the first waiter is a reader. In the case of RWSEM_WAKE_READ_OWNED, not all readers can

[PATCH 4/5] locking/rwsem: Wake up all waiting readers if RWSEM_WAKE_READ_OWNED

2020-11-17 Thread Waiman Long
The rwsem wakeup logic has been modified by commit d3681e269fff ("locking/rwsem: Wake up almost all readers in wait queue") to wake up all readers in the wait queue if the first waiter is a reader. In the case of RWSEM_WAKE_READ_OWNED, not all readers can be woken up if the first waiter happens to

[PATCH v12 5/5] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA

2020-11-17 Thread Alex Kogan
wait queue, no need to use +* the fake NUMA node ID. +*/ + if (cn->numa_node == CNA_PRIORITY_NODE) + cn->numa_node = cn->real_numa_node; + + /* * Try and put the time otherwise spent spin waiting on

Re: [PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-16 Thread Rafael J. Wysocki
On Thu, Nov 5, 2020 at 12:24 AM Saravana Kannan wrote: > > To check if a device is still waiting for its supplier devices to be > added, we used to check if the device is in a global > waiting_for_suppliers list. Since the global list will be deleted in > subsequent patches, t

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-11 Thread Miklos Szeredi
On Wed, Nov 11, 2020 at 8:42 AM Eric W. Biederman wrote: > > Miklos Szeredi writes: > > Okay, so the problem with making the wait_event() at the end of > > request_wait_answer() killable is that it would allow compromising the > > server's integrity by unlocking the VFS level lock (which

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-10 Thread Eric W. Biederman
You have a good point about the looping issue. I wonder if there is a way to enhance this comparatively simple approach to prevent the more complex scenario you mention. > Let's take a concrete example: > - task A is "server" for fuse fs a > - task B is

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Miklos Szeredi
is > very annoying not to be able to kill processes with SIGKILL or the OOM > killer. > > You have a good point about the looping issue. I wonder if there is a > way to enhance this comparatively simple approach to prevent the more > complex scenario you mention. Let's take a concre

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Eric W. Biederman
Miklos Szeredi writes: > On Mon, Nov 9, 2020 at 1:48 PM Alexey Gladkov > wrote: >> >> This patch removes one kind of the deadlocks inside the fuse daemon. The >> problem appear when the fuse daemon itself makes a file operation on its >> filesystem and receives a fatal signal. >> >> This

Re: [RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Miklos Szeredi
On Mon, Nov 9, 2020 at 1:48 PM Alexey Gladkov wrote: > > This patch removes one kind of the deadlocks inside the fuse daemon. The > problem appear when the fuse daemon itself makes a file operation on its > filesystem and receives a fatal signal. > > This deadlock can be interrupted via fusectl

[RESEND PATCH v3] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-11-09 Thread Alexey Gladkov
struct fuse_conn { /* Do not show mount options */ unsigned int no_mount_options:1; + /** Do not check fusedev_file (virtiofs) */ + unsigned int check_fusedev_file:1; + /** The number of requests waiting for completion */ atomic_t num_waiting; diff --git

[PATCH v1 14/18] driver core: Use device's fwnode to check if it is waiting for suppliers

2020-11-04 Thread Saravana Kannan
To check if a device is still waiting for its supplier devices to be added, we used to check if the device is in a global waiting_for_suppliers list. Since the global list will be deleted in subsequent patches, this patch stops using this check. Instead, this patch uses a more device specific

[PATCH 5.9 174/391] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-11-03 Thread Greg Kroah-Hartman
[qcom_q6v5] handle_nested_irq+0xd0/0x138 qcom_smp2p_intr+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18 This busy loop naturally lends itself to using a wait queue so that each thread that tries to send a message will sleep waiting
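The busy-loop-to-wait-queue conversion this backport describes has a simple userspace analogue using a condition variable: the sender sleeps until a slot frees up instead of spinning. The kernel code uses a wait queue, not pthreads; the single-slot model and names below are illustrative only:

```c
#include <assert.h>
#include <pthread.h>

/* Userspace sketch: sleep for a free TCS-like slot, don't busy-wait. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t slot_free = PTHREAD_COND_INITIALIZER;
static int free_slots = 1;	/* toy model: one message slot */

static void claim_slot(void)
{
	pthread_mutex_lock(&lock);
	while (free_slots == 0)			/* sleep, don't spin */
		pthread_cond_wait(&slot_free, &lock);
	free_slots--;
	pthread_mutex_unlock(&lock);
}

static void release_slot(void)
{
	pthread_mutex_lock(&lock);
	free_slots++;
	pthread_cond_signal(&slot_free);	/* wake one waiter */
	pthread_mutex_unlock(&lock);
}
```

Each would-be spinner blocks in `pthread_cond_wait()` and is woken only when a slot is actually returned, which is the same shape as sleeping on a wait queue until a TCS slot is free.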

[PATCH 5.4 158/408] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us)

[PATCH 5.8 254/633] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us)

[PATCH 5.9 300/757] spi: omap2-mcspi: Improve performance waiting for CHSTAT

2020-10-27 Thread Greg Kroah-Hartman
From: Aswath Govindraju [ Upstream commit 7b1d96813317358312440d0d07abbfbeb0ef8d22 ] This reverts commit 13d515c796 (spi: omap2-mcspi: Switch to readl_poll_timeout()). The amount of time spent polling for the MCSPI_CHSTAT bits to be set on AM335x-icev2 platform is less than 1us (about 0.6us)

[PATCH AUTOSEL 5.9 140/147] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-10-26 Thread Sasha Levin
[qcom_q6v5] handle_nested_irq+0xd0/0x138 qcom_smp2p_intr+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18 This busy loop naturally lends itself to using a wait queue so that each thread that tries to send a message will sleep waiting

[PATCH AUTOSEL 5.8 128/132] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-10-26 Thread Sasha Levin
[qcom_q6v5] handle_nested_irq+0xd0/0x138 qcom_smp2p_intr+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18 This busy loop naturally lends itself to using a wait queue so that each thread that tries to send a message will sleep waiting

Re: [Openipmi-developer] [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-10-07 Thread Corey Minyard
On Thu, Sep 10, 2020 at 11:08:40AM +, Boehme, Markus via Openipmi-developer wrote: > > > - && ipmi_version_minor(id) >= 5)) { > > > - unsigned int set; > > > + if (ipmi_version_major(id) == 1 && ipmi_version_minor(id) < 5) { > > This is incorrect, it

[PATCH v1] fuse: Abort waiting for a response if the daemon receives a fatal signal

2020-10-01 Thread Alexey Gladkov
struct fuse_conn { /* Do not show mount options */ unsigned int no_mount_options:1; + /** Do not check fusedev_file (virtiofs) */ + unsigned int check_fusedev_file:1; + /** The number of requests waiting for completion */ atomic_t num_waiting; diff --git

[PATCH v11 5/5] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA

2020-09-15 Thread Alex Kogan
Try and put the time otherwise spent spin waiting on * _Q_LOCKED_PENDING_MASK to use by sorting our lists. -- 2.21.1 (Apple Git-122.3)

[PATCH 5.8 16/16] mptcp: free acked data before waiting for more memory

2020-09-11 Thread Greg Kroah-Hartman
From: Florian Westphal [ Upstream commit 1cec170d458b1d18f6f1654ca84c0804a701c5ef ] After subflow lock is dropped, more wmem might have been made available. This fixes a deadlock in mptcp_connect.sh 'mmap' mode: wmem is exhausted. But as the mptcp socket holds on to already-acked data (for

Re: [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-10 Thread Boehme, Markus
t respond. This leads to an indefinite wait in the > > ipmi_msghandler's __scan_channels function, showing up as hung task > > messages for modprobe. > > > > Add a timeout waiting for the channel scan to complete. If the scan > > fails to complete within that time, treat

Re: [PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-07 Thread Corey Minyard
> messages for modprobe. > > Add a timeout waiting for the channel scan to complete. If the scan > fails to complete within that time, treat that like IPMI 1.0 and only > assume the presence of the primary IPMB channel at channel number 0. This patch is a significant rewrite of the

Re: [PATCH 2/3] ipmi: Add timeout waiting for device GUID

2020-09-07 Thread Corey Minyard
for modprobe. > > According to IPMI 2.0 specification chapter 20, the implementation of > the Get Device GUID command is optional. Therefore, add a timeout to > waiting for its response and treat the lack of one the same as missing a > device GUID. This patch looks good. It'

[PATCH 2/3] ipmi: Add timeout waiting for device GUID

2020-09-07 Thread Markus Boehme
of the Get Device GUID command is optional. Therefore, add a timeout to waiting for its response and treat the lack of one the same as missing a device GUID. Signed-off-by: Stefan Nuernberger Signed-off-by: Markus Boehme --- drivers/char/ipmi/ipmi_msghandler.c | 16 1 file

[PATCH 3/3] ipmi: Add timeout waiting for channel information

2020-09-07 Thread Markus Boehme
We have observed hosts with misbehaving BMCs that receive a Get Channel Info command but don't respond. This leads to an indefinite wait in the ipmi_msghandler's __scan_channels function, showing up as hung task messages for modprobe. Add a timeout waiting for the channel scan to complete
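The fallback policy in this patch can be sketched in userspace: if the BMC never answers the Get Channel Info scan before the timeout, behave like IPMI 1.0 and assume only the primary IPMB channel at number 0. The function name below is illustrative; the real logic lives in ipmi_msghandler.c:

```c
#include <assert.h>
#include <stdbool.h>

#define PRIMARY_IPMB_CHANNEL 0

/* Returns how many channels the handler should report after the scan:
 * the scanned count on success, or just the one primary IPMB channel
 * (number 0) when the scan timed out. Illustrative sketch only. */
static int channels_after_scan(bool scan_completed, int scanned_count)
{
	if (!scan_completed)
		return 1;	/* IPMI 1.0 behavior: channel 0 only */

	return scanned_count;
}
```

This keeps modprobe from hanging on a misbehaving BMC while still leaving a usable interface.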

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-27 Thread Jan Kara
Honza > > > On 08/26/2020 21:23, Jan Kara wrote: > > On Wed 05-08-20 09:35:51, Xianting Tian wrote: > > > When waiting for the completion of io, we need account iowait time. As > > > wait_for_completion() calls schedule_timeout(), w

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-27 Thread Jan Kara
er pointless. Honza > On 08/26/2020 21:23, Jan Kara wrote: > On Wed 05-08-20 09:35:51, Xianting Tian wrote: > > When waiting for the completion of io, we need account iowait time. As > > wait_for_completion() calls schedule_timeout(), which doesn't account > > iowait time.

Re: [PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-26 Thread Jan Kara
On Wed 05-08-20 09:35:51, Xianting Tian wrote: > When waiting for the completion of io, we need account iowait time. As > wait_for_completion() calls schedule_timeout(), which doesn't account > iowait time. While wait_for_completion_io() calls io_schedule_timeout(), > which will ac

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-20 Thread Dmitry Vyukov
.com/x/repro.syz?x=1585998690 > > > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1228fea190 > > > > > > IMPORTANT: if you fix the issue, please add the following tag to the > > > commit: > > > Reported-by: syzbot+df400f2f24a1677c

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-20 Thread Andrii Nakryiko
> > > IMPORTANT: if you fix the issue, please add the following tag to the commit: > > Reported-by: syzbot+df400f2f24a1677cd...@syzkaller.appspotmail.com > > > > unregister_netdevice: waiting for lo to become free. Usage count = 1 > > Based on the repro, it looks b

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread syzbot
syzbot has bisected this issue to: commit 449325b52b7a6208f65ed67d3484fd7b7184477b Author: Alexei Starovoitov Date: Tue May 22 02:22:29 2018 + umh: introduce fork_usermode_blob() helper bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=11f8618690 start commit:

Re: unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread Dmitry Vyukov
ea391e86e81) > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=1585998690 > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=1228fea190 > > IMPORTANT: if you fix the issue, please add the following tag to the commit: > Reported-by: syzbot+df400f2f24a1677cd...@syzkalle

unregister_netdevice: waiting for DEV to become free (4)

2020-08-19 Thread syzbot
://syzkaller.appspot.com/x/repro.c?x=1228fea190 IMPORTANT: if you fix the issue, please add the following tag to the commit: Reported-by: syzbot+df400f2f24a1677cd...@syzkaller.appspotmail.com unregister_netdevice: waiting for lo to become free. Usage count = 1 --- This report is generated by a bot

[PATCH] aio: use wait_for_completion_io() when waiting for completion of io

2020-08-05 Thread Xianting Tian
When waiting for the completion of io, we need account iowait time. As wait_for_completion() calls schedule_timeout(), which doesn't account iowait time. While wait_for_completion_io() calls io_schedule_timeout(), which will account iowait time. So using wait_for_completion_io() instead
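The accounting difference the patch relies on can be modeled in userspace: both waits sleep until the completion fires, but only the `_io` variant marks the sleeping task as in iowait (via io_schedule_timeout()), so the time shows up in iowait statistics. The struct and names below are an illustrative model, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a task's iowait accounting while it waits. */
struct task_model {
	bool in_iowait;
	long iowait_ms;
};

/* io == true models wait_for_completion_io(); io == false models
 * plain wait_for_completion(), which sleeps without iowait credit. */
static void wait_for_completion_model(struct task_model *t, long sleep_ms, bool io)
{
	t->in_iowait = io;		/* io_schedule_timeout() sets this */
	if (io)
		t->iowait_ms += sleep_ms;	/* accounted as iowait */
	/* ... sleep until the completion is signaled ... */
	t->in_iowait = false;
}
```

Switching an I/O wait from the plain to the `_io` variant therefore changes only where the sleep time is accounted, not how long the caller sleeps.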

[PATCH 5.4 39/90] nvme-tcp: fix possible hang waiting for icresp response

2020-08-03 Thread Greg Kroah-Hartman
From: Sagi Grimberg [ Upstream commit adc99fd378398f4c58798a1c57889872967d56a6 ] If the controller died exactly when we are receiving icresp we hang because icresp may never return. Make sure to set a high finite limit. Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver")

[PATCH 5.7 042/120] nvme-tcp: fix possible hang waiting for icresp response

2020-08-03 Thread Greg Kroah-Hartman
From: Sagi Grimberg [ Upstream commit adc99fd378398f4c58798a1c57889872967d56a6 ] If the controller died exactly when we are receiving icresp we hang because icresp may never return. Make sure to set a high finite limit. Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver")

[PATCH 4.4 09/54] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-30 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 4.9 09/61] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-30 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

Re: [PATCH v10 5/5] locking/qspinlock: Avoid moving certain threads between waiting queues in CNA

2020-07-28 Thread Waiman Long
e; if (cn->intra_count < intra_node_handoff_threshold) { + /* + * We are at the head of the wait queue, no need to use + * the fake NUMA node ID. + */ + if (cn->numa_node == CNA_PRIORITY_NODE) + cn->numa_node = cn->real_numa_node; + /* * Try and put the time otherwise spent spin waiting on * _Q_LOCKED_PENDING_MASK to use by sorting our lists. -- 2.18.1

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-28 Thread Stanimir Varbanov
_set_performance_state+0xdc/0x1b8 >>> dev_pm_genpd_set_performance_state+0xb8/0xf8 >>> q6v5_pds_disable+0x34/0x60 [qcom_q6v5_mss] >>> qcom_msa_handover+0x38/0x44 [qcom_q6v5_mss] >>> q6v5_handover_interrupt+0x24/0x3c [qcom_q6v5] >>> handle_neste

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-28 Thread Doug Anderson
0x24/0x3c [qcom_q6v5] > > handle_nested_irq+0xd0/0x138 > > qcom_smp2p_intr+0x188/0x200 > > irq_thread_fn+0x2c/0x70 > > irq_thread+0xfc/0x14c > > kthread+0x11c/0x12c > > ret_from_fork+0x10/0x18 > > > > This busy loop naturally lends itself

[PATCH 4.19 15/86] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 5.7 026/179] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 5.4 026/138] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

[PATCH 4.14 12/64] SUNRPC reverting d03727b248d0 ("NFSv4 fix CLOSE not waiting for direct IO compeletion")

2020-07-27 Thread Greg Kroah-Hartman
From: Olga Kornievskaia commit 65caafd0d2145d1dd02072c4ced540624daeab40 upstream. Reverting commit d03727b248d0 "NFSv4 fix CLOSE not waiting for direct IO compeletion". This patch made it so that fput() by calling inode_dio_done() in nfs_file_release() would wait uninterruptab

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-26 Thread Maulik Shah
itself to using a wait queue so that each thread that tries to send a message will sleep waiting on the waitqueue and only be woken up when a free slot is available. This should make things more predictable too because the scheduler will be able to sleep tasks that are waiting on a free tcs instead

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-26 Thread Stanimir Varbanov
0/0x138 > qcom_smp2p_intr+0x188/0x200 > irq_thread_fn+0x2c/0x70 > irq_thread+0xfc/0x14c > kthread+0x11c/0x12c > ret_from_fork+0x10/0x18 > > This busy loop naturally lends itself to using a wait queue so that each > thread that tries to send a message

Re: [PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
d_irq+0xd0/0x138 > qcom_smp2p_intr+0x188/0x200 > irq_thread_fn+0x2c/0x70 > irq_thread+0xfc/0x14c > kthread+0x11c/0x12c > ret_from_fork+0x10/0x18 > > This busy loop naturally lends itself to using a wait queue so that each > thread that tries to send a mess

[PATCH v2] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
+0x188/0x200 irq_thread_fn+0x2c/0x70 irq_thread+0xfc/0x14c kthread+0x11c/0x12c ret_from_fork+0x10/0x18 This busy loop naturally lends itself to using a wait queue so that each thread that tries to send a message will sleep waiting on the waitqueue and only be woken up when a free slot

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 13:31:39) > Hi, > > On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd wrote: > > > > Quoting Doug Anderson (2020-07-24 13:11:59) > > > > > > I wasn't suggesting adding a timeout. I was just saying that if > > > claim_tcs_for_req() were to ever return an error code

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Doug Anderson
Hi, On Fri, Jul 24, 2020 at 1:27 PM Stephen Boyd wrote: > > Quoting Doug Anderson (2020-07-24 13:11:59) > > > > I wasn't suggesting adding a timeout. I was just saying that if > > claim_tcs_for_req() were to ever return an error code other than > > -EBUSY that we'd need a check for it because

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Stephen Boyd
Quoting Doug Anderson (2020-07-24 13:11:59) > > I wasn't suggesting adding a timeout. I was just saying that if > claim_tcs_for_req() were to ever return an error code other than > -EBUSY that we'd need a check for it because otherwise we'd interpret > the result as a tcs_id. > Ok that sounds

Re: [PATCH] soc: qcom: rpmh-rsc: Sleep waiting for tcs slots to be free

2020-07-24 Thread Lina Iyer
On Fri, Jul 24 2020 at 14:11 -0600, Stephen Boyd wrote: Quoting Lina Iyer (2020-07-24 13:08:41) On Fri, Jul 24 2020 at 14:01 -0600, Stephen Boyd wrote: >Quoting Doug Anderson (2020-07-24 12:49:56) >> Hi, >> >> On Fri, Jul 24, 2020 at 12:44 PM Stephen Boyd wrote: >I think Lina was alluding to
