On Fri 2023-01-27 08:57:40, Seth Forshee wrote:
> On Fri, Jan 27, 2023 at 12:19:03PM +0100, Petr Mladek wrote:
> > Could you please provide some more details about the test system?
> > Is there anything important to make it reproducible?
> >
> > The following aspect
On Fri 2023-01-27 11:37:02, Peter Zijlstra wrote:
> On Thu, Jan 26, 2023 at 08:43:55PM -0800, Josh Poimboeuf wrote:
> > On Thu, Jan 26, 2023 at 03:12:35PM -0600, Seth Forshee (DigitalOcean) wrote:
> > > On Thu, Jan 26, 2023 at 06:03:16PM +0100, Petr Mladek wrote:
On Thu 2023-01-26 15:12:35, Seth Forshee (DigitalOcean) wrote:
> On Thu, Jan 26, 2023 at 06:03:16PM +0100, Petr Mladek wrote:
> > On Fri 2023-01-20 16:12:20, Seth Forshee (DigitalOcean) wrote:
> > > We've fairly regularly seen livepatches which cannot transition with
On Fri 2023-01-20 16:12:20, Seth Forshee (DigitalOcean) wrote:
> We've fairly regularly seen livepatches which cannot transition within
> kpatch's timeout period due to busy vhost worker kthreads.
I have missed this detail. Miroslav told me that we have solved
something similar some time ago,
On Thu 2023-01-26 12:16:36, Petr Mladek wrote:
> On Wed 2023-01-25 10:57:30, Seth Forshee wrote:
> > On Wed, Jan 25, 2023 at 12:34:26PM +0100, Petr Mladek wrote:
> > > On Tue 2023-01-24 11:21:39, Seth Forshee wrote:
> > > > On Tue, Jan 24, 2023 at 03:17
On Wed 2023-01-25 10:57:30, Seth Forshee wrote:
> On Wed, Jan 25, 2023 at 12:34:26PM +0100, Petr Mladek wrote:
> > On Tue 2023-01-24 11:21:39, Seth Forshee wrote:
> > > On Tue, Jan 24, 2023 at 03:17:43PM +0100, Petr Mladek wrote:
> > > > On Fri 2023-01-20 16:12:
On Tue 2023-01-24 11:21:39, Seth Forshee wrote:
> On Tue, Jan 24, 2023 at 03:17:43PM +0100, Petr Mladek wrote:
> > On Fri 2023-01-20 16:12:22, Seth Forshee (DigitalOcean) wrote:
> > > Livepatch relies on stack checking of sleeping tasks to switch kthreads,
> > >
On Fri 2023-01-20 16:12:22, Seth Forshee (DigitalOcean) wrote:
> Livepatch relies on stack checking of sleeping tasks to switch kthreads,
> so a busy kthread can block a livepatch transition indefinitely. We've
> seen this happen fairly often with busy vhost kthreads.
To be precise, it would be
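As a simplified illustration of the stack-checking idea described above
(hypothetical helper name, not the actual livepatch transition code):

#include <linux/sched.h>

/*
 * A task can be switched to the patched code only when it is not running
 * and none of the functions changed by the livepatch are on its stack.
 * A vhost worker kthread that is busy nearly all the time rarely satisfies
 * the first condition, so the transition can stall indefinitely.
 */
static bool can_switch_task(struct task_struct *task)
{
	if (task_is_running(task))
		return false;	/* cannot safely inspect a running task's stack */

	if (stack_has_patched_function(task))	/* hypothetical helper */
		return false;	/* the old code may still be in use */

	return true;
}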
On Thu 2022-06-09 12:02:04, Peter Zijlstra wrote:
> On Thu, Jun 09, 2022 at 11:16:46AM +0200, Petr Mladek wrote:
> > On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> > > The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> > > tracepoint"), w
On Thu 2022-06-09 20:30:58, Sergey Senozhatsky wrote:
> My emails are getting rejected... Let me try web-interface
Bad day for mail sending. I have problems as well ;-)
> Kudos to Petr for the questions and thanks to PeterZ for the answers.
>
> On Thu, Jun 9, 2022 at 7:02 PM Peter Zijlstra
Sending again. The previous attempt was rejected by several
recipients. It was caused by mail server changes on my side.
I am sorry for spamming those who got the 1st mail already.
On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> The problem, per commit fc98c3c8c9dc ("printk: use rcuidle
On Wed 2022-06-08 16:27:47, Peter Zijlstra wrote:
> The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
> tracepoint"), was printk usage from the cpuidle path where RCU was
> already disabled.
>
> Per the patches earlier in this series, this is no longer the case.
My understanding
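As a minimal sketch of the trade-off behind the rcuidle console tracepoint
from commit fc98c3c8c9dc (the wrapper below is illustrative, not printk code):

#include <trace/events/printk.h>

static void emit_console_trace(const char *text, size_t len, bool rcu_watching)
{
	if (rcu_watching)
		trace_console(text, len);		/* plain tracepoint: needs RCU watching */
	else
		trace_console_rcuidle(text, len);	/* _rcuidle variant: enters RCU around the event */
}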
On Tue 2020-11-17 09:33:25, Steven Rostedt wrote:
> On Tue, 17 Nov 2020 12:23:41 +0200
> Leon Romanovsky wrote:
>
> > Hi,
> >
> > Approximately two weeks ago, our regression team started to experience those
> > netconsole splats. The tested code is Linus's master (-rc4) + netdev
> > net-next
>
On Wed 2018-05-23 12:54:15, Thomas Garnier wrote:
> When using -fPIE/PIC with function tracing, the compiler generates a
> call through the GOT (call *__fentry__@GOTPCREL). This instruction
> takes 6 bytes, instead of the 5 bytes of the usual relative call.
>
> If PIE is enabled, replace the 6th byte of the
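For reference, the size difference being discussed (standard x86-64 encodings;
the zeroed offsets are placeholders):

static const unsigned char ftrace_call_rel32[5] = {
	0xe8, 0x00, 0x00, 0x00, 0x00,		/* call __fentry__ (5-byte relative call) */
};
static const unsigned char ftrace_call_got[6] = {
	0xff, 0x15, 0x00, 0x00, 0x00, 0x00,	/* call *__fentry__@GOTPCREL(%rip) (6 bytes) */
};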
d be used in parallel with fill_balloon() and leak_balloon().
This patch splits the existing work into two pieces. One is for
updating the balloon stats. The other is for resizing the balloon.
It seems that they can proceed in parallel without any extra locking.
Signed-off-by: Petr Mladek <pmla
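A rough sketch of the split described in this cover letter, with illustrative
field and handler names (not necessarily the ones used in the patches):

#include <linux/workqueue.h>

struct balloon_work_sketch {
	struct work_struct update_balloon_stats_work;	/* refresh stats for the host */
	struct work_struct update_balloon_size_work;	/* fill/leak toward the target size */
};

static void update_stats_fn(struct work_struct *work)
{
	/* collect and push the balloon statistics */
}

static void update_size_fn(struct work_struct *work)
{
	/* fill or leak pages until the balloon reaches the requested size */
}

static void balloon_work_init(struct balloon_work_sketch *vb)
{
	INIT_WORK(&vb->update_balloon_stats_work, update_stats_fn);
	INIT_WORK(&vb->update_balloon_size_work, update_size_fn);
}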
be found
at http://thread.gmane.org/gmane.linux.kernel/2100306
Petr Mladek (2):
virtio_balloon: Use a workqueue instead of "vballoon" kthread
virtio_balloon: Allow to resize and update the balloon stats in
parallel
drivers/virtio/virtio_ballo
of the fact that it sleeps on failure: let's
> wake the config change handler and fill it asynchronously.
>
> Reported-by: Petr Mladek <pmla...@suse.com>
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> ---
>
> I was unable to test this - for some reason my test V
On Sat 2016-01-02 23:36:03, Michael S. Tsirkin wrote:
> On Sat, Jan 02, 2016 at 06:43:16AM -0500, Tejun Heo wrote:
> > On Fri, Jan 01, 2016 at 12:18:17PM +0200, Michael S. Tsirkin wrote:
> > > > My initial idea was to use a dedicated workqueue. Michael S. Tsirkin
> > > > @@ -563,7 +534,7 @@ static
-by: Petr Mladek <pmla...@suse.com>
---
drivers/virtio/virtio_balloon.c | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 7efc32945810..d73a86db2490 100644
--- a/drivers/virtio/virtio_balloon.c
From: Petr Mladek <pmla...@suse.cz>
This patch moves the deferred work from the "vballoon" kthread into a
system freezable workqueue.
We do not need to maintain and run a dedicated kthread. Also, the
event-driven workqueue API makes the logic much easier. Especially, we do
not lo
because the code manipulates memory but it is not
used in the memory reclaim path.
+ initialize the work item before allocating the workqueue
JFYI, the discussion about the previous version can be found at
http://thread.gmane.org/gmane.linux.kernel.virtualization/23701
Petr Mladek (2
the system freezable workqueue instead. Tejun Heo confirmed that
the system workqueue has a pretty high concurrency level (256) by default.
Therefore we need not be afraid of blocking for too long.
Signed-off-by: Petr Mladek <pmla...@suse.cz>
Acked-by: Michael S. Tsirkin <m...@redhat.com>
---
Changes
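For illustration, queuing the balloon work on the shared freezable workqueue
could look roughly like this (item and handler names are made up for the sketch):

#include <linux/workqueue.h>

static void balloon_work_fn(struct work_struct *work)
{
	/* do one round of ballooning / stats update, then return */
}

static DECLARE_WORK(balloon_work, balloon_work_fn);

static void kick_balloon(void)
{
	/* system_freezable_wq is frozen together with user space on suspend */
	queue_work(system_freezable_wq, &balloon_work);
}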
On Thu 2014-11-20 11:29:35, Tejun Heo wrote:
> On Thu, Nov 20, 2014 at 06:26:24PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 20, 2014 at 06:25:43PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Nov 20, 2014 at 11:07:46AM -0500, Tejun Heo wrote:
> > > > On Thu, Nov 20, 2014 at 05:03:17PM +0100, Petr
On Thu 2014-11-20 19:00:16, Michael S. Tsirkin wrote:
> On Thu, Nov 20, 2014 at 05:55:58PM +0100, Petr Mladek wrote:
> > On Thu 2014-11-20 11:29:35, Tejun Heo wrote:
> > > On Thu, Nov 20, 2014 at 06:26:24PM +0200, Michael S. Tsirkin wrote:
> > > > On Thu, Nov 20, 2014 at 06:25:43PM +0200, Michael S. Tsirkin
On Fri 2014-11-14 08:19:15, Tejun Heo wrote:
> Hello, Michael, Petr.
> On Wed, Nov 12, 2014 at 03:32:04PM +0200, Michael S. Tsirkin wrote:
> > + /* The workqueue servicing the balloon. */
> > + struct workqueue_struct *wq;
> > + struct work_struct wq_work;
We could use system_freezable_wq
but it is not
used in the memory reclaim path.
+ initialize the work item before allocating the workqueue
Signed-off-by: Petr Mladek <pmla...@suse.cz>
---
drivers/virtio/virtio_balloon.c | 86 +++--
1 file changed, 39 insertions(+), 47 deletions(-)
diff --git
The 2nd way to create the workqueue gives about the same results as the kthread.
This is why it is used in the patch.
Signed-off-by: Petr Mladek <pmla...@suse.cz>
---
drivers/virtio/virtio_balloon.c | 96
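The excerpt does not show which two workqueue setups were compared; purely as
an illustration, a dedicated freezable workqueue (the alternative to the shared
system_freezable_wq) would be created roughly like this:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *vballoon_wq;

static int vballoon_wq_init(void)
{
	/* one dedicated, freezable workqueue for the balloon work */
	vballoon_wq = alloc_workqueue("vballoon", WQ_FREEZABLE, 0);
	if (!vballoon_wq)
		return -ENOMEM;
	return 0;
}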