[PATCH AUTOSEL 6.6 03/12] bpf: put uprobe link's path and task in release callback

2024-04-15 Thread Sasha Levin
From: Andrii Nakryiko 

[ Upstream commit e9c856cabefb71d47b2eeb197f72c9c88e9b45b0 ]

There is no need to delay putting either the path or the task until the
deallocation step. It can be done right after bpf_uprobe_unregister.
Between release and dealloc there may still be some running BPF programs,
but they don't access either the task or the path, only the data in
link->uprobes, so it is safe to do.

On the other hand, doing path_put() in the dealloc callback makes the
dealloc sleepable, because path_put() itself might sleep. That is
problematic given the need to call the uprobe link's dealloc through
call_rcu(), which is what the next bug-fix patch does. So solve the
problem by releasing these resources early.
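The release-versus-dealloc split can be illustrated with a small userspace model (hypothetical names and types, not the kernel API): release() drops the task and path references immediately after unregistering, while dealloc() only frees memory that in-flight BPF programs may still read.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical userspace model of the lifetime split this patch makes:
 * release() puts the task and path right after unregistering, while
 * dealloc() only frees memory, so it stays non-sleepable and can later
 * be deferred via call_rcu(). */
struct fake_link {
	int task_refs;  /* stands in for the task_struct refcount */
	int path_refs;  /* stands in for the path refcount */
	int *uprobes;   /* still readable between release and dealloc */
};

static struct fake_link *link_create(void)
{
	struct fake_link *l = calloc(1, sizeof(*l));

	l->task_refs = 1;
	l->path_refs = 1;
	l->uprobes = calloc(4, sizeof(int));
	return l;
}

static void link_release(struct fake_link *l)
{
	/* "bpf_uprobe_unregister()" would run here; then put refs at once */
	if (l->task_refs)
		l->task_refs--;
	l->path_refs--;
}

static void link_dealloc(struct fake_link *l)
{
	/* only frees memory; nothing here can sleep */
	free(l->uprobes);
	free(l);
}
```

Between the two calls the uprobes array stays valid, which is exactly what still-running programs require.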

Signed-off-by: Andrii Nakryiko 
Link: https://lore.kernel.org/r/20240328052426.3042617-1-and...@kernel.org
Signed-off-by: Alexei Starovoitov 
Signed-off-by: Sasha Levin 
---
 kernel/trace/bpf_trace.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 1d76f3b014aee..4d49a9f47e688 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3065,6 +3065,9 @@ static void bpf_uprobe_multi_link_release(struct bpf_link *link)
 
umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt);
+   if (umulti_link->task)
+   put_task_struct(umulti_link->task);
+   path_put(&umulti_link->path);
 }
 
 static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
@@ -3072,9 +3075,6 @@ static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
struct bpf_uprobe_multi_link *umulti_link;
 
umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
-   if (umulti_link->task)
-   put_task_struct(umulti_link->task);
-   path_put(&umulti_link->path);
kvfree(umulti_link->uprobes);
kfree(umulti_link);
 }
-- 
2.43.0




[PATCH AUTOSEL 6.8 03/15] bpf: put uprobe link's path and task in release callback

2024-04-15 Thread Sasha Levin
From: Andrii Nakryiko 

[ Upstream commit e9c856cabefb71d47b2eeb197f72c9c88e9b45b0 ]

There is no need to delay putting either the path or the task until the
deallocation step. It can be done right after bpf_uprobe_unregister.
Between release and dealloc there may still be some running BPF programs,
but they don't access either the task or the path, only the data in
link->uprobes, so it is safe to do.

On the other hand, doing path_put() in the dealloc callback makes the
dealloc sleepable, because path_put() itself might sleep. That is
problematic given the need to call the uprobe link's dealloc through
call_rcu(), which is what the next bug-fix patch does. So solve the
problem by releasing these resources early.

Signed-off-by: Andrii Nakryiko 
Link: https://lore.kernel.org/r/20240328052426.3042617-1-and...@kernel.org
Signed-off-by: Alexei Starovoitov 
Signed-off-by: Sasha Levin 
---
 kernel/trace/bpf_trace.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 7ac6c52b25ebc..45de8a4923e21 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -3142,6 +3142,9 @@ static void bpf_uprobe_multi_link_release(struct bpf_link *link)
 
umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
bpf_uprobe_unregister(&umulti_link->path, umulti_link->uprobes, umulti_link->cnt);
+   if (umulti_link->task)
+   put_task_struct(umulti_link->task);
+   path_put(&umulti_link->path);
 }
 
 static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
@@ -3149,9 +3152,6 @@ static void bpf_uprobe_multi_link_dealloc(struct bpf_link *link)
struct bpf_uprobe_multi_link *umulti_link;
 
umulti_link = container_of(link, struct bpf_uprobe_multi_link, link);
-   if (umulti_link->task)
-   put_task_struct(umulti_link->task);
-   path_put(&umulti_link->path);
kvfree(umulti_link->uprobes);
kfree(umulti_link);
 }
-- 
2.43.0




[PATCH AUTOSEL 5.10 31/31] ring-buffer: use READ_ONCE() to read cpu_buffer->commit_page in concurrent environment

2024-03-29 Thread Sasha Levin
From: linke li 

[ Upstream commit f1e30cb6369251c03f63c564006f96a54197dcc4 ]

In function ring_buffer_iter_empty(), cpu_buffer->commit_page is read
while other threads may change it. This can cause the time_stamp that is
read on the next line to come from a different page. Use READ_ONCE() to
avoid having to reason about compiler optimizations, now and in the future.
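The pattern the patch relies on can be sketched in plain C (illustrative types and a simplified READ_ONCE() macro, not the kernel's definitions): a single forced load of the shared pointer into a local means the subsequent time_stamp read is guaranteed to use that one snapshot, even if another thread updates the pointer concurrently.

```c
#include <assert.h>

/* Simplified rendition of the kernel macro: the volatile cast forces
 * exactly one load, which the compiler may neither split nor repeat. */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

struct buffer_page { unsigned long time_stamp; };

struct cpu_buffer { struct buffer_page *commit_page; };

static unsigned long commit_timestamp(struct cpu_buffer *cb)
{
	/* one load; the page pointer and the time_stamp read agree */
	struct buffer_page *commit_page = READ_ONCE(cb->commit_page);

	return commit_page->time_stamp;
}
```

Without READ_ONCE(), the compiler is free to reload cb->commit_page for each use, so the timestamp could come from a different page than the one compared elsewhere.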

Link: https://lore.kernel.org/linux-trace-kernel/tencent_dff7d3561a0686b5e8fc079150a025051...@qq.com

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Signed-off-by: linke li 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4a43b8846b49f..70b6cb6bfb56e 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4184,7 +4184,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
cpu_buffer = iter->cpu_buffer;
reader = cpu_buffer->reader_page;
head_page = cpu_buffer->head_page;
-   commit_page = cpu_buffer->commit_page;
+   commit_page = READ_ONCE(cpu_buffer->commit_page);
commit_ts = commit_page->page->time_stamp;
 
/*
-- 
2.43.0




[PATCH AUTOSEL 5.15 34/34] ring-buffer: use READ_ONCE() to read cpu_buffer->commit_page in concurrent environment

2024-03-29 Thread Sasha Levin
From: linke li 

[ Upstream commit f1e30cb6369251c03f63c564006f96a54197dcc4 ]

In function ring_buffer_iter_empty(), cpu_buffer->commit_page is read
while other threads may change it. This can cause the time_stamp that is
read on the next line to come from a different page. Use READ_ONCE() to
avoid having to reason about compiler optimizations, now and in the future.

Link: https://lore.kernel.org/linux-trace-kernel/tencent_dff7d3561a0686b5e8fc079150a025051...@qq.com

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Signed-off-by: linke li 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index d9bed77f96c1f..2b46c66fb132d 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4343,7 +4343,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
cpu_buffer = iter->cpu_buffer;
reader = cpu_buffer->reader_page;
head_page = cpu_buffer->head_page;
-   commit_page = cpu_buffer->commit_page;
+   commit_page = READ_ONCE(cpu_buffer->commit_page);
commit_ts = commit_page->page->time_stamp;
 
/*
-- 
2.43.0




[PATCH AUTOSEL 6.1 52/52] ring-buffer: use READ_ONCE() to read cpu_buffer->commit_page in concurrent environment

2024-03-29 Thread Sasha Levin
From: linke li 

[ Upstream commit f1e30cb6369251c03f63c564006f96a54197dcc4 ]

In function ring_buffer_iter_empty(), cpu_buffer->commit_page is read
while other threads may change it. This can cause the time_stamp that is
read on the next line to come from a different page. Use READ_ONCE() to
avoid having to reason about compiler optimizations, now and in the future.

Link: https://lore.kernel.org/linux-trace-kernel/tencent_dff7d3561a0686b5e8fc079150a025051...@qq.com

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Signed-off-by: linke li 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index e019a9278794f..7ed92f311dc9b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4384,7 +4384,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
cpu_buffer = iter->cpu_buffer;
reader = cpu_buffer->reader_page;
head_page = cpu_buffer->head_page;
-   commit_page = cpu_buffer->commit_page;
+   commit_page = READ_ONCE(cpu_buffer->commit_page);
commit_ts = commit_page->page->time_stamp;
 
/*
-- 
2.43.0




[PATCH AUTOSEL 6.6 75/75] ring-buffer: use READ_ONCE() to read cpu_buffer->commit_page in concurrent environment

2024-03-29 Thread Sasha Levin
From: linke li 

[ Upstream commit f1e30cb6369251c03f63c564006f96a54197dcc4 ]

In function ring_buffer_iter_empty(), cpu_buffer->commit_page is read
while other threads may change it. This can cause the time_stamp that is
read on the next line to come from a different page. Use READ_ONCE() to
avoid having to reason about compiler optimizations, now and in the future.

Link: https://lore.kernel.org/linux-trace-kernel/tencent_dff7d3561a0686b5e8fc079150a025051...@qq.com

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Signed-off-by: linke li 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1ac6637895a44..0d98e847fd6c2 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4389,7 +4389,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
cpu_buffer = iter->cpu_buffer;
reader = cpu_buffer->reader_page;
head_page = cpu_buffer->head_page;
-   commit_page = cpu_buffer->commit_page;
+   commit_page = READ_ONCE(cpu_buffer->commit_page);
commit_ts = commit_page->page->time_stamp;
 
/*
-- 
2.43.0




[PATCH AUTOSEL 6.8 98/98] ring-buffer: use READ_ONCE() to read cpu_buffer->commit_page in concurrent environment

2024-03-29 Thread Sasha Levin
From: linke li 

[ Upstream commit f1e30cb6369251c03f63c564006f96a54197dcc4 ]

In function ring_buffer_iter_empty(), cpu_buffer->commit_page is read
while other threads may change it. This can cause the time_stamp that is
read on the next line to come from a different page. Use READ_ONCE() to
avoid having to reason about compiler optimizations, now and in the future.

Link: https://lore.kernel.org/linux-trace-kernel/tencent_dff7d3561a0686b5e8fc079150a025051...@qq.com

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Signed-off-by: linke li 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index aa332ace108b1..54410c8cacbe8 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -4350,7 +4350,7 @@ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
cpu_buffer = iter->cpu_buffer;
reader = cpu_buffer->reader_page;
head_page = cpu_buffer->head_page;
-   commit_page = cpu_buffer->commit_page;
+   commit_page = READ_ONCE(cpu_buffer->commit_page);
commit_ts = commit_page->page->time_stamp;
 
/*
-- 
2.43.0




FAILED: Patch "virtio: reenable config if freezing device failed" failed to apply to 4.19-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 4.19-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 310227f42882c52356b523e2f4e11690eebcd2ab Mon Sep 17 00:00:00 2001
From: David Hildenbrand 
Date: Tue, 13 Feb 2024 14:54:25 +0100
Subject: [PATCH] virtio: reenable config if freezing device failed

Currently, we don't reenable the config if freezing the device failed.

For example, virtio-mem currently doesn't support suspend+resume, and
trying to freeze the device will always fail. Afterwards, the device
will no longer respond to resize requests, because it won't get notified
about config changes.

Let's fix this by re-enabling the config if freezing fails.
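The shape of the fix can be modeled in userspace C (hypothetical structs and helpers, not the real virtio API): on the driver-freeze error path, config-change notifications are re-enabled before returning, so the device keeps hearing about config changes even though freezing failed.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the error path the patch fixes. */
struct fake_dev {
	bool config_enabled;
	int (*freeze)(struct fake_dev *dev);  /* driver callback, may fail */
};

static void config_disable(struct fake_dev *dev) { dev->config_enabled = false; }
static void config_enable(struct fake_dev *dev)  { dev->config_enabled = true; }

static int device_freeze(struct fake_dev *dev)
{
	int ret;

	config_disable(dev);          /* freeze always disables config first */
	if (dev->freeze) {
		ret = dev->freeze(dev);
		if (ret) {
			/* the fix: undo the disable on failure */
			config_enable(dev);
			return ret;
		}
	}
	return 0;
}

/* a driver that cannot freeze, like virtio-mem in the description above */
static int failing_freeze(struct fake_dev *dev) { (void)dev; return -1; }
```

Before the fix, the early return would leave config_enabled false forever, which is why the device stopped responding to resize requests.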

Fixes: 22b7050a024d ("virtio: defer config changed notifications")
Cc: 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Xuan Zhuo 
Signed-off-by: David Hildenbrand 
Message-Id: <20240213135425.795001-1-da...@redhat.com>
Signed-off-by: Michael S. Tsirkin 
---
 drivers/virtio/virtio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index f4080692b3513..f513ee21b1c18 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -510,8 +510,10 @@ int virtio_device_freeze(struct virtio_device *dev)
 
if (drv && drv->freeze) {
ret = drv->freeze(dev);
-   if (ret)
+   if (ret) {
+   virtio_config_enable(dev);
return ret;
+   }
}
 
if (dev->config->destroy_avq)
-- 
2.43.0







FAILED: Patch "ring-buffer: Do not set shortest_full when full target is hit" failed to apply to 5.4-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 761d9473e27f0c8782895013a3e7b52a37c8bcfc Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)" 
Date: Tue, 12 Mar 2024 11:56:41 -0400
Subject: [PATCH] ring-buffer: Do not set shortest_full when full target is hit

The rb_watermark_hit() checks if the amount of data in the ring buffer is
above the percentage level passed in by the "full" variable. If it is, it
returns true.

But it also sets the "shortest_full" field of the cpu_buffer that informs
writers that it needs to call the irq_work if the amount of data on the
ring buffer is above the requested amount.

rb_watermark_hit() always sets shortest_full, even when the amount in the
ring buffer is already what the caller wants. Since the caller is not
going to wait in that case, because it has what it wants, there is no
reason to set shortest_full.
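A simplified model of the corrected logic (names and the percentage check are illustrative): shortest_full tells writers when to fire the wakeup irq_work, so it should only be lowered when the caller is actually going to wait, i.e. when the target was not hit.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the per-CPU buffer state. */
struct sim_cpu_buffer { int shortest_full; };

static bool watermark_hit(struct sim_cpu_buffer *cb, int percent_filled, int full)
{
	bool ret = percent_filled >= full;   /* stands in for full_hit() */

	/* Before the fix this update ran unconditionally, even when ret was
	 * true and the caller would never wait for a writer wakeup. */
	if (!ret && (!cb->shortest_full || cb->shortest_full > full))
		cb->shortest_full = full;
	return ret;
}
```

With the extra !ret guard, a caller whose target is already satisfied leaves shortest_full untouched, so writers are not asked to signal a waiter that does not exist.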

Link: https://lore.kernel.org/linux-trace-kernel/20240312115641.6aa8b...@gandalf.local.home

Cc: sta...@vger.kernel.org
Cc: Mathieu Desnoyers 
Fixes: 42fb0a1e84ff5 ("tracing/ring-buffer: Have polling block on watermark")
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
---
 kernel/trace/ring_buffer.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index aa332ace108b1..6ffbccb9bcf00 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -834,9 +834,10 @@ static bool rb_watermark_hit(struct trace_buffer *buffer, int cpu, int full)
pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page;
ret = !pagebusy && full_hit(buffer, cpu, full);
 
-   if (!cpu_buffer->shortest_full ||
-   cpu_buffer->shortest_full > full)
-   cpu_buffer->shortest_full = full;
+   if (!ret && (!cpu_buffer->shortest_full ||
+                cpu_buffer->shortest_full > full)) {
+           cpu_buffer->shortest_full = full;
+   }
    raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
}
return ret;
-- 
2.43.0







FAILED: Patch "virtio: reenable config if freezing device failed" failed to apply to 5.4-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 5.4-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 310227f42882c52356b523e2f4e11690eebcd2ab Mon Sep 17 00:00:00 2001
From: David Hildenbrand 
Date: Tue, 13 Feb 2024 14:54:25 +0100
Subject: [PATCH] virtio: reenable config if freezing device failed

Currently, we don't reenable the config if freezing the device failed.

For example, virtio-mem currently doesn't support suspend+resume, and
trying to freeze the device will always fail. Afterwards, the device
will no longer respond to resize requests, because it won't get notified
about config changes.

Let's fix this by re-enabling the config if freezing fails.

Fixes: 22b7050a024d ("virtio: defer config changed notifications")
Cc: 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Xuan Zhuo 
Signed-off-by: David Hildenbrand 
Message-Id: <20240213135425.795001-1-da...@redhat.com>
Signed-off-by: Michael S. Tsirkin 
---
 drivers/virtio/virtio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index f4080692b3513..f513ee21b1c18 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -510,8 +510,10 @@ int virtio_device_freeze(struct virtio_device *dev)
 
if (drv && drv->freeze) {
ret = drv->freeze(dev);
-   if (ret)
+   if (ret) {
+   virtio_config_enable(dev);
return ret;
+   }
}
 
if (dev->config->destroy_avq)
-- 
2.43.0







FAILED: Patch "tracing/ring-buffer: Fix wait_on_pipe() race" failed to apply to 5.15-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 2aa043a55b9a764c9cbde5a8c654eeaaffe224cf Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)" 
Date: Tue, 12 Mar 2024 08:15:08 -0400
Subject: [PATCH] tracing/ring-buffer: Fix wait_on_pipe() race

When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.

 CPU 0  CPU 1
 -  -
   wait_index++;
   index = wait_index;
   ring_buffer_wake_waiters();
   wait_on_pipe()
 ring_buffer_wait();

The ring_buffer_wait() will miss the wakeup from CPU 1. The problem is
that the ring_buffer_wait() needs the logic of:

prepare_to_wait();
if (!condition)
schedule();

Where the missing condition check is the iter->wait_index update.

Have the ring_buffer_wait() take a conditional callback function and a
data parameter that can be used within the wait_event_interruptible() of
the ring_buffer_wait() function.

In wait_on_pipe(), pass a condition function that checks whether the
wait_index has been updated; if it has, it returns true to break out
of the wait_event_interruptible() loop.

Create a new field "closed" in the trace_iterator and set it in the
.flush() callback before calling ring_buffer_wake_waiters().
This will keep any new readers from waiting on a closed file descriptor.

Have the wait_on_pipe() condition callback also check the closed field.

Change the wait_index field of the trace_iterator to atomic_t. There's no
reason it needs to be 'long' and making it atomic and using
atomic_read_acquire() and atomic_fetch_inc_release() will provide the
necessary memory barriers.

Add a "woken" flag to tracing_buffers_splice_read() to exit the loop after
one more try to fetch data. That is, if it waited for data and something
woke it up, it should try to collect any new data and then exit back to
user space.
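The condition-callback shape described above can be sketched as a userspace model (hypothetical names; no real scheduling, the loop body just stands in for schedule()): the waiter re-checks a caller-supplied condition in the same way prepare_to_wait()/schedule() does, so a wait_index bump or a closed file cannot be missed between the check and going to sleep.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the state wait_on_pipe() watches. */
struct pipe_wait {
	int wait_index;   /* bumped by the waker before waking waiters */
	bool closed;      /* set by .flush() before waking waiters */
};

struct wait_ctx {
	struct pipe_wait *p;
	int start_index;  /* snapshot taken before waiting */
};

typedef bool (*ring_buffer_cond_fn)(void *data);

/* condition used by the waiter: stop if closed or woken since snapshot */
static bool wait_pipe_cond(void *data)
{
	struct wait_ctx *ctx = data;

	return ctx->p->closed || ctx->p->wait_index != ctx->start_index;
}

/* stand-in for ring_buffer_wait(): re-check the condition before sleeping */
static int ring_buffer_wait_sim(ring_buffer_cond_fn cond, void *data)
{
	int spins = 0;

	while (!cond(data) && spins < 3)
		spins++;   /* the real code would schedule() here */
	return cond(data) ? 0 : -1;
}
```

Because the condition is evaluated inside the wait loop, a wakeup that lands after the snapshot but before the sleep is still observed, which closes the race in the CPU 0 / CPU 1 diagram above.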

Link: https://lore.kernel.org/linux-trace-kernel/CAHk-=wgsngewhfxzajiaqznwpmqetqmi1waes2o1v6l4c_u...@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/20240312121703.557950...@goodmis.org

Cc: sta...@vger.kernel.org
Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: linke li 
Cc: Rabin Vincent 
Fixes: f3ddb74ad0790 ("tracing: Wake up ring buffer waiters on closing of the file")
Signed-off-by: Steven Rostedt (Google) 
---
 include/linux/ring_buffer.h  |  3 ++-
 include/linux/trace_events.h |  5 ++++-
 kernel/trace/ring_buffer.c   | 13 ++-
 kernel/trace/trace.c | 43 ++--
 4 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 338a33db1577e..dc5ae4e96aee0 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -99,7 +99,8 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 })
 
 typedef bool (*ring_buffer_cond_fn)(void *data);
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+                    ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
  struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f9..fc6d0af56bb17 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
unsigned int            temp_size;
char                    *fmt;   /* modified format holder */
unsigned int            fmt_size;
-   long                    wait_index;
+   atomic_t                wait_index;

/* trace_seq for __print_flags() and __print_symbolic() etc. */
struct trace_seq        tmp_seq;

cpumask_var_t           started;

+   /* Set when the file is closed to prevent new waiters */
+   bool                    closed;
+
/* it's true when current open file is snapshot */
bool                    snapshot;
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f4c34b7c7e1e7..350607cce8694 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -902,23 +902,26 @@ static bool rb_wait_once(void *data)
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on

FAILED: Patch "virtio: reenable config if freezing device failed" failed to apply to 5.10-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 5.10-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 310227f42882c52356b523e2f4e11690eebcd2ab Mon Sep 17 00:00:00 2001
From: David Hildenbrand 
Date: Tue, 13 Feb 2024 14:54:25 +0100
Subject: [PATCH] virtio: reenable config if freezing device failed

Currently, we don't reenable the config if freezing the device failed.

For example, virtio-mem currently doesn't support suspend+resume, and
trying to freeze the device will always fail. Afterwards, the device
will no longer respond to resize requests, because it won't get notified
about config changes.

Let's fix this by re-enabling the config if freezing fails.

Fixes: 22b7050a024d ("virtio: defer config changed notifications")
Cc: 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Xuan Zhuo 
Signed-off-by: David Hildenbrand 
Message-Id: <20240213135425.795001-1-da...@redhat.com>
Signed-off-by: Michael S. Tsirkin 
---
 drivers/virtio/virtio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index f4080692b3513..f513ee21b1c18 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -510,8 +510,10 @@ int virtio_device_freeze(struct virtio_device *dev)
 
if (drv && drv->freeze) {
ret = drv->freeze(dev);
-   if (ret)
+   if (ret) {
+   virtio_config_enable(dev);
return ret;
+   }
}
 
if (dev->config->destroy_avq)
-- 
2.43.0







FAILED: Patch "virtio: reenable config if freezing device failed" failed to apply to 5.15-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 5.15-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 310227f42882c52356b523e2f4e11690eebcd2ab Mon Sep 17 00:00:00 2001
From: David Hildenbrand 
Date: Tue, 13 Feb 2024 14:54:25 +0100
Subject: [PATCH] virtio: reenable config if freezing device failed

Currently, we don't reenable the config if freezing the device failed.

For example, virtio-mem currently doesn't support suspend+resume, and
trying to freeze the device will always fail. Afterwards, the device
will no longer respond to resize requests, because it won't get notified
about config changes.

Let's fix this by re-enabling the config if freezing fails.

Fixes: 22b7050a024d ("virtio: defer config changed notifications")
Cc: 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Xuan Zhuo 
Signed-off-by: David Hildenbrand 
Message-Id: <20240213135425.795001-1-da...@redhat.com>
Signed-off-by: Michael S. Tsirkin 
---
 drivers/virtio/virtio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index f4080692b3513..f513ee21b1c18 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -510,8 +510,10 @@ int virtio_device_freeze(struct virtio_device *dev)
 
if (drv && drv->freeze) {
ret = drv->freeze(dev);
-   if (ret)
+   if (ret) {
+   virtio_config_enable(dev);
return ret;
+   }
}
 
if (dev->config->destroy_avq)
-- 
2.43.0







FAILED: Patch "tracing/ring-buffer: Fix wait_on_pipe() race" failed to apply to 6.1-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 6.1-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 2aa043a55b9a764c9cbde5a8c654eeaaffe224cf Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)" 
Date: Tue, 12 Mar 2024 08:15:08 -0400
Subject: [PATCH] tracing/ring-buffer: Fix wait_on_pipe() race

When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.

 CPU 0  CPU 1
 -  -
   wait_index++;
   index = wait_index;
   ring_buffer_wake_waiters();
   wait_on_pipe()
 ring_buffer_wait();

The ring_buffer_wait() will miss the wakeup from CPU 1. The problem is
that the ring_buffer_wait() needs the logic of:

prepare_to_wait();
if (!condition)
schedule();

Where the missing condition check is the iter->wait_index update.

Have the ring_buffer_wait() take a conditional callback function and a
data parameter that can be used within the wait_event_interruptible() of
the ring_buffer_wait() function.

In wait_on_pipe(), pass a condition function that checks whether the
wait_index has been updated; if it has, it returns true to break out
of the wait_event_interruptible() loop.

Create a new field "closed" in the trace_iterator and set it in the
.flush() callback before calling ring_buffer_wake_waiters().
This will keep any new readers from waiting on a closed file descriptor.

Have the wait_on_pipe() condition callback also check the closed field.

Change the wait_index field of the trace_iterator to atomic_t. There's no
reason it needs to be 'long' and making it atomic and using
atomic_read_acquire() and atomic_fetch_inc_release() will provide the
necessary memory barriers.

Add a "woken" flag to tracing_buffers_splice_read() to exit the loop after
one more try to fetch data. That is, if it waited for data and something
woke it up, it should try to collect any new data and then exit back to
user space.

Link: https://lore.kernel.org/linux-trace-kernel/CAHk-=wgsngewhfxzajiaqznwpmqetqmi1waes2o1v6l4c_u...@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/20240312121703.557950...@goodmis.org

Cc: sta...@vger.kernel.org
Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: linke li 
Cc: Rabin Vincent 
Fixes: f3ddb74ad0790 ("tracing: Wake up ring buffer waiters on closing of the file")
Signed-off-by: Steven Rostedt (Google) 
---
 include/linux/ring_buffer.h  |  3 ++-
 include/linux/trace_events.h |  5 ++++-
 kernel/trace/ring_buffer.c   | 13 ++-
 kernel/trace/trace.c | 43 ++--
 4 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 338a33db1577e..dc5ae4e96aee0 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -99,7 +99,8 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 })
 
 typedef bool (*ring_buffer_cond_fn)(void *data);
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+                    ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
  struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f9..fc6d0af56bb17 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
unsigned int            temp_size;
char                    *fmt;   /* modified format holder */
unsigned int            fmt_size;
-   long                    wait_index;
+   atomic_t                wait_index;

/* trace_seq for __print_flags() and __print_symbolic() etc. */
struct trace_seq        tmp_seq;

cpumask_var_t           started;

+   /* Set when the file is closed to prevent new waiters */
+   bool                    closed;
+
/* it's true when current open file is snapshot */
bool                    snapshot;
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f4c34b7c7e1e7..350607cce8694 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -902,23 +902,26 @@ static bool rb_wait_once(void *data)
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on
 

FAILED: Patch "virtio: reenable config if freezing device failed" failed to apply to 6.1-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 6.1-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 310227f42882c52356b523e2f4e11690eebcd2ab Mon Sep 17 00:00:00 2001
From: David Hildenbrand 
Date: Tue, 13 Feb 2024 14:54:25 +0100
Subject: [PATCH] virtio: reenable config if freezing device failed

Currently, we don't reenable the config if freezing the device failed.

For example, virtio-mem currently doesn't support suspend+resume, and
trying to freeze the device will always fail. Afterwards, the device
will no longer respond to resize requests, because it won't get notified
about config changes.

Let's fix this by re-enabling the config if freezing fails.

Fixes: 22b7050a024d ("virtio: defer config changed notifications")
Cc: 
Cc: "Michael S. Tsirkin" 
Cc: Jason Wang 
Cc: Xuan Zhuo 
Signed-off-by: David Hildenbrand 
Message-Id: <20240213135425.795001-1-da...@redhat.com>
Signed-off-by: Michael S. Tsirkin 
---
 drivers/virtio/virtio.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio.c b/drivers/virtio/virtio.c
index f4080692b3513..f513ee21b1c18 100644
--- a/drivers/virtio/virtio.c
+++ b/drivers/virtio/virtio.c
@@ -510,8 +510,10 @@ int virtio_device_freeze(struct virtio_device *dev)
 
if (drv && drv->freeze) {
ret = drv->freeze(dev);
-   if (ret)
+   if (ret) {
+   virtio_config_enable(dev);
return ret;
+   }
}
 
if (dev->config->destroy_avq)
-- 
2.43.0







FAILED: Patch "tracing/ring-buffer: Fix wait_on_pipe() race" failed to apply to 6.6-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 6.6-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 2aa043a55b9a764c9cbde5a8c654eeaaffe224cf Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)" 
Date: Tue, 12 Mar 2024 08:15:08 -0400
Subject: [PATCH] tracing/ring-buffer: Fix wait_on_pipe() race

When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.

 CPU 0  CPU 1
 -  -
   wait_index++;
   index = wait_index;
   ring_buffer_wake_waiters();
   wait_on_pipe()
 ring_buffer_wait();

The ring_buffer_wait() will miss the wakeup from CPU 1. The problem is
that the ring_buffer_wait() needs the logic of:

prepare_to_wait();
if (!condition)
schedule();

Where the missing condition check is the iter->wait_index update.

Have the ring_buffer_wait() take a conditional callback function and a
data parameter that can be used within the wait_event_interruptible() of
the ring_buffer_wait() function.

In wait_on_pipe(), pass a condition function that checks whether the
wait_index has been updated; if it has, the function returns true to break
out of the wait_event_interruptible() loop.

Create a new field "closed" in the trace_iterator and set it in the
.flush() callback before calling ring_buffer_wake_waiters().
This will keep any new readers from waiting on a closed file descriptor.

Have the wait_on_pipe() condition callback also check the closed field.

Change the wait_index field of the trace_iterator to atomic_t. There's no
reason it needs to be 'long' and making it atomic and using
atomic_read_acquire() and atomic_fetch_inc_release() will provide the
necessary memory barriers.

Add a "woken" flag to tracing_buffers_splice_read() to exit the loop after
one more try to fetch data. That is, if it waited for data and something
woke it up, it should try to collect any new data and then exit back to
user space.

Link: https://lore.kernel.org/linux-trace-kernel/CAHk-=wgsngewhfxzajiaqznwpmqetqmi1waes2o1v6l4c_u...@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/20240312121703.557950...@goodmis.org

Cc: sta...@vger.kernel.org
Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: linke li 
Cc: Rabin Vincent 
Fixes: f3ddb74ad0790 ("tracing: Wake up ring buffer waiters on closing of the file")
Signed-off-by: Steven Rostedt (Google) 
---
 include/linux/ring_buffer.h  |  3 ++-
 include/linux/trace_events.h |  5 -
 kernel/trace/ring_buffer.c   | 13 ++-
 kernel/trace/trace.c | 43 ++--
 4 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 338a33db1577e..dc5ae4e96aee0 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -99,7 +99,8 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 })
 
 typedef bool (*ring_buffer_cond_fn)(void *data);
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
  struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f9..fc6d0af56bb17 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
unsigned inttemp_size;
char*fmt;   /* modified format holder */
unsigned intfmt_size;
-   longwait_index;
+   atomic_twait_index;
 
/* trace_seq for __print_flags() and __print_symbolic() etc. */
struct trace_seqtmp_seq;
 
cpumask_var_t   started;
 
+   /* Set when the file is closed to prevent new waiters */
+   boolclosed;
+
/* it's true when current open file is snapshot */
boolsnapshot;
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f4c34b7c7e1e7..350607cce8694 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -902,23 +902,26 @@ static bool rb_wait_once(void *data)
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on
 

FAILED: Patch "tracing/ring-buffer: Fix wait_on_pipe() race" failed to apply to 6.7-stable tree

2024-03-27 Thread Sasha Levin
The patch below does not apply to the 6.7-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to .

Thanks,
Sasha

-- original commit in Linus's tree --

From 2aa043a55b9a764c9cbde5a8c654eeaaffe224cf Mon Sep 17 00:00:00 2001
From: "Steven Rostedt (Google)" 
Date: Tue, 12 Mar 2024 08:15:08 -0400
Subject: [PATCH] tracing/ring-buffer: Fix wait_on_pipe() race

When the trace_pipe_raw file is closed, there should be no new readers on
the file descriptor. This is mostly handled with the waking and wait_index
fields of the iterator. But there's still a slight race.

 CPU 0  CPU 1
 -  -
   wait_index++;
   index = wait_index;
   ring_buffer_wake_waiters();
   wait_on_pipe()
 ring_buffer_wait();

The ring_buffer_wait() will miss the wakeup from CPU 1. The problem is
that the ring_buffer_wait() needs the logic of:

prepare_to_wait();
if (!condition)
schedule();

Where the missing condition check is the iter->wait_index update.

Have the ring_buffer_wait() take a conditional callback function and a
data parameter that can be used within the wait_event_interruptible() of
the ring_buffer_wait() function.

In wait_on_pipe(), pass a condition function that checks whether the
wait_index has been updated; if it has, the function returns true to break
out of the wait_event_interruptible() loop.

Create a new field "closed" in the trace_iterator and set it in the
.flush() callback before calling ring_buffer_wake_waiters().
This will keep any new readers from waiting on a closed file descriptor.

Have the wait_on_pipe() condition callback also check the closed field.

Change the wait_index field of the trace_iterator to atomic_t. There's no
reason it needs to be 'long' and making it atomic and using
atomic_read_acquire() and atomic_fetch_inc_release() will provide the
necessary memory barriers.

Add a "woken" flag to tracing_buffers_splice_read() to exit the loop after
one more try to fetch data. That is, if it waited for data and something
woke it up, it should try to collect any new data and then exit back to
user space.

Link: https://lore.kernel.org/linux-trace-kernel/CAHk-=wgsngewhfxzajiaqznwpmqetqmi1waes2o1v6l4c_u...@mail.gmail.com/
Link: https://lore.kernel.org/linux-trace-kernel/20240312121703.557950...@goodmis.org

Cc: sta...@vger.kernel.org
Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Cc: Andrew Morton 
Cc: Linus Torvalds 
Cc: linke li 
Cc: Rabin Vincent 
Fixes: f3ddb74ad0790 ("tracing: Wake up ring buffer waiters on closing of the file")
Signed-off-by: Steven Rostedt (Google) 
---
 include/linux/ring_buffer.h  |  3 ++-
 include/linux/trace_events.h |  5 -
 kernel/trace/ring_buffer.c   | 13 ++-
 kernel/trace/trace.c | 43 ++--
 4 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 338a33db1577e..dc5ae4e96aee0 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -99,7 +99,8 @@ __ring_buffer_alloc(unsigned long size, unsigned flags, struct lock_class_key *k
 })
 
 typedef bool (*ring_buffer_cond_fn)(void *data);
-int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full);
+int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
+ring_buffer_cond_fn cond, void *data);
 __poll_t ring_buffer_poll_wait(struct trace_buffer *buffer, int cpu,
  struct file *filp, poll_table *poll_table, int full);
 void ring_buffer_wake_waiters(struct trace_buffer *buffer, int cpu);
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index d68ff9b1247f9..fc6d0af56bb17 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -103,13 +103,16 @@ struct trace_iterator {
unsigned inttemp_size;
char*fmt;   /* modified format holder */
unsigned intfmt_size;
-   longwait_index;
+   atomic_twait_index;
 
/* trace_seq for __print_flags() and __print_symbolic() etc. */
struct trace_seqtmp_seq;
 
cpumask_var_t   started;
 
+   /* Set when the file is closed to prevent new waiters */
+   boolclosed;
+
/* it's true when current open file is snapshot */
boolsnapshot;
 
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f4c34b7c7e1e7..350607cce8694 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -902,23 +902,26 @@ static bool rb_wait_once(void *data)
  * @buffer: buffer to wait on
  * @cpu: the cpu buffer to wait on
 

[PATCH AUTOSEL 5.4 4/6] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index b836fc61a24f4..f3a5c5e480cf0 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -80,7 +80,7 @@ void notrace __hot ftrace_function_trampoline(unsigned long 
parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
return 0;
-- 
2.43.0




[PATCH AUTOSEL 5.10 6/8] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index 63e3ecb9da812..8538425cc43e0 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -81,7 +81,7 @@ void notrace __hot ftrace_function_trampoline(unsigned long 
parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
return 0;
-- 
2.43.0




[PATCH AUTOSEL 5.15 7/9] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index 0a1e75af5382d..44d70fc30aae5 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -81,7 +81,7 @@ void notrace __hot ftrace_function_trampoline(unsigned long 
parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
return 0;
-- 
2.43.0




[PATCH AUTOSEL 6.1 08/12] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index 4d392e4ed3584..ac7253891d5ed 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -78,7 +78,7 @@ void notrace __hot ftrace_function_trampoline(unsigned long 
parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
static_key_enable(&ftrace_graph_enable.key);
-- 
2.43.0




[PATCH AUTOSEL 6.6 10/22] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index d1defb9ede70c..621a4b386ae4f 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -78,7 +78,8 @@ asmlinkage void notrace __hot ftrace_function_trampoline(unsigned long parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
static_key_enable(&ftrace_graph_enable.key);
-- 
2.43.0




[PATCH AUTOSEL 6.7 11/24] parisc/ftrace: add missing CONFIG_DYNAMIC_FTRACE check

2024-02-29 Thread Sasha Levin
From: Max Kellermann 

[ Upstream commit 250f5402e636a5cec9e0e95df252c3d54307210f ]

Fixes a bug revealed by -Wmissing-prototypes when
CONFIG_FUNCTION_GRAPH_TRACER is enabled but not CONFIG_DYNAMIC_FTRACE:

 arch/parisc/kernel/ftrace.c:82:5: error: no previous prototype for 
'ftrace_enable_ftrace_graph_caller' [-Werror=missing-prototypes]
82 | int ftrace_enable_ftrace_graph_caller(void)
   | ^
 arch/parisc/kernel/ftrace.c:88:5: error: no previous prototype for 
'ftrace_disable_ftrace_graph_caller' [-Werror=missing-prototypes]
88 | int ftrace_disable_ftrace_graph_caller(void)
   | ^~

Signed-off-by: Max Kellermann 
Signed-off-by: Helge Deller 
Signed-off-by: Sasha Levin 
---
 arch/parisc/kernel/ftrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index d1defb9ede70c..621a4b386ae4f 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -78,7 +78,8 @@ asmlinkage void notrace __hot ftrace_function_trampoline(unsigned long parent,
 #endif
 }
 
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+#if defined(CONFIG_DYNAMIC_FTRACE) && defined(CONFIG_FUNCTION_GRAPH_TRACER)
 int ftrace_enable_ftrace_graph_caller(void)
 {
static_key_enable(&ftrace_graph_enable.key);
-- 
2.43.0




[PATCH AUTOSEL 4.19 5/8] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 331d74f9281b..2b012d7165cd 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2727,10 +2727,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -2767,8 +2768,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 5.4 08/11] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f6a6678f43b9..4faf3275b1f6 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2864,10 +2864,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -2904,8 +2905,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 5.10 09/13] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2fd5d2b7a209..4029c56dfcf0 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2819,10 +2819,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -2859,8 +2860,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 5.15 14/19] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 3eefe8171925..6a655bd442fe 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2913,10 +2913,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -2953,8 +2954,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 6.1 18/27] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 21d3461fb5d1..45f1a871b7da 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3474,10 +3474,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -3514,8 +3515,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 6.6 21/31] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index deb2229ab4d8..7cb0548d17a3 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -4096,10 +4096,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -4136,8 +4137,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 6.7 29/39] virtio_net: Fix "‘%d’ directive writing between 1 and 11 bytes into a region of size 10" warnings

2024-01-28 Thread Sasha Levin
From: Zhu Yanjun 

[ Upstream commit e3fe8d28c67bf6c291e920c6d04fa22afa14e6e4 ]

Fix the warnings when building the virtio_net driver.

"
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4551:48: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 10 [-Wformat-overflow=]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  |^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4551:41: note: directive argument in the range 
[-2147483643, 65534]
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c:4551:17: note: ‘sprintf’ output between 8 and 18 bytes 
into a destination of size 16
 4551 | sprintf(vi->rq[i].name, "input.%d", i);
  | ^~
drivers/net/virtio_net.c: In function ‘init_vqs’:
drivers/net/virtio_net.c:4552:49: warning: ‘%d’ directive writing between 1 and 
11 bytes into a region of size 9 [-Wformat-overflow=]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~
In function ‘virtnet_find_vqs’,
inlined from ‘init_vqs’ at drivers/net/virtio_net.c:4645:8:
drivers/net/virtio_net.c:4552:41: note: directive argument in the range 
[-2147483643, 65534]
 4552 | sprintf(vi->sq[i].name, "output.%d", i);
  | ^~~
drivers/net/virtio_net.c:4552:17: note: ‘sprintf’ output between 9 and 19 bytes 
into a destination of size 16
 4552 | sprintf(vi->sq[i].name, "output.%d", i);

"

Reviewed-by: Xuan Zhuo 
Signed-off-by: Zhu Yanjun 
Link: https://lore.kernel.org/r/20240104020902.2753599-1-yanjun@intel.com
Signed-off-by: Jakub Kicinski 
Signed-off-by: Sasha Levin 
---
 drivers/net/virtio_net.c | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 51b1868d2f22..1caf21fd5032 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -4096,10 +4096,11 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 {
vq_callback_t **callbacks;
struct virtqueue **vqs;
-   int ret = -ENOMEM;
-   int i, total_vqs;
const char **names;
+   int ret = -ENOMEM;
+   int total_vqs;
bool *ctx;
+   u16 i;
 
/* We expect 1 RX virtqueue followed by 1 TX virtqueue, followed by
 * possible N-1 RX/TX queue pairs used in multiqueue mode, followed by
@@ -4136,8 +4137,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
for (i = 0; i < vi->max_queue_pairs; i++) {
callbacks[rxq2vq(i)] = skb_recv_done;
callbacks[txq2vq(i)] = skb_xmit_done;
-   sprintf(vi->rq[i].name, "input.%d", i);
-   sprintf(vi->sq[i].name, "output.%d", i);
+   sprintf(vi->rq[i].name, "input.%u", i);
+   sprintf(vi->sq[i].name, "output.%u", i);
names[rxq2vq(i)] = vi->rq[i].name;
names[txq2vq(i)] = vi->sq[i].name;
if (ctx)
-- 
2.43.0




[PATCH AUTOSEL 6.7 19/39] tracefs/eventfs: Use root and instance inodes as default ownership

2024-01-28 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 8186fff7ab649085e2c60d032d9a20a85af1d87c ]

Instead of walking the dentries on mount/remount to update the gid values of
all the dentries if a gid option is specified on mount, just update the root
inode. Add .getattr, .setattr, and .permissions on the tracefs inode
operations to update the permissions of the files and directories.

For all files and directories in the top level instance:

 /sys/kernel/tracing/*

It will use the root inode, the inode representing /sys/kernel/tracing (or
wherever it is mounted), as the source of default permissions.

When an instance is created:

 mkdir /sys/kernel/tracing/instances/foo

The directory "foo" and all its files and directories underneath will use
the default of what foo is when it was created. A remount of tracefs will
not affect it.

If a user were to modify the permissions of any file or directory in
tracefs, it will also no longer be modified by a change in ownership of a
remount.

The events directory, if it is in the top level instance, will use the
tracefs root inode as the default ownership for itself and all the files and
directories below it.

For the events directory in an instance ("foo"), it will keep the ownership
of what it was when it was created, and that will be used as the default
ownership for the files and directories beneath it.

Link: 
https://lore.kernel.org/linux-trace-kernel/CAHk-=wjvdgkjdxbbvln2wbznqp4ush46e3gqj9m7ug6dpx2...@mail.gmail.com/
Link: 
https://lore.kernel.org/linux-trace-kernel/20240103215016.1e0c9...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mathieu Desnoyers 
Cc: Linus Torvalds 
Cc: Al Viro 
Cc: Christian Brauner 
Cc: Greg Kroah-Hartman 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 fs/tracefs/event_inode.c |  79 +++-
 fs/tracefs/inode.c   | 198 ++-
 fs/tracefs/internal.h|   3 +
 3 files changed, 190 insertions(+), 90 deletions(-)

diff --git a/fs/tracefs/event_inode.c b/fs/tracefs/event_inode.c
index f0677ea0ec24..517f1ae1a058 100644
--- a/fs/tracefs/event_inode.c
+++ b/fs/tracefs/event_inode.c
@@ -45,6 +45,7 @@ enum {
EVENTFS_SAVE_MODE   = BIT(16),
EVENTFS_SAVE_UID= BIT(17),
EVENTFS_SAVE_GID= BIT(18),
+   EVENTFS_TOPLEVEL= BIT(19),
 };
 
 #define EVENTFS_MODE_MASK  (EVENTFS_SAVE_MODE - 1)
@@ -117,10 +118,17 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, 
struct dentry *dentry,
 * The events directory dentry is never freed, unless its
 * part of an instance that is deleted. It's attr is the
 * default for its child files and directories.
-* Do not update it. It's not used for its own mode or ownership
+* Do not update it. It's not used for its own mode or ownership.
 */
-   if (!ei->is_events)
+   if (ei->is_events) {
+   /* But it still needs to know if it was modified */
+   if (iattr->ia_valid & ATTR_UID)
+   ei->attr.mode |= EVENTFS_SAVE_UID;
+   if (iattr->ia_valid & ATTR_GID)
+   ei->attr.mode |= EVENTFS_SAVE_GID;
+   } else {
update_attr(&ei->attr, iattr);
+   }
 
} else {
name = dentry->d_name.name;
@@ -138,9 +146,66 @@ static int eventfs_set_attr(struct mnt_idmap *idmap, 
struct dentry *dentry,
return ret;
 }
 
+static void update_top_events_attr(struct eventfs_inode *ei, struct dentry 
*dentry)
+{
+   struct inode *inode;
+
+   /* Only update if the "events" was on the top level */
+   if (!ei || !(ei->attr.mode & EVENTFS_TOPLEVEL))
+   return;
+
+   /* Get the tracefs root inode. */
+   inode = d_inode(dentry->d_sb->s_root);
+   ei->attr.uid = inode->i_uid;
+   ei->attr.gid = inode->i_gid;
+}
+
+static void set_top_events_ownership(struct inode *inode)
+{
+   struct tracefs_inode *ti = get_tracefs(inode);
+   struct eventfs_inode *ei = ti->private;
+   struct dentry *dentry;
+
+   /* The top events directory doesn't get automatically updated */
+   if (!ei || !ei->is_events || !(ei->attr.mode & EVENTFS_TOPLEVEL))
+   return;
+
+   dentry = ei->dentry;
+
+   update_top_events_attr(ei, dentry);
+
+   if (!(ei->attr.mode & EVENTFS_SAVE_UID))
+   inode->i_uid = ei->attr.uid;
+
+   if (!(ei->attr.mode & EVENTFS_SAVE_GID))
+   inode->i_gid = ei->attr.gid;
+}
+
+static int eventfs_get_attr(struct mnt_idmap *idmap,
+   const struct path *path, struct kstat *stat,
+   u32 request_mask, unsigned int flags)
+{
+ 

[PATCH AUTOSEL 6.7 13/39] ring-buffer: Do no swap cpu buffers if order is different

2024-01-28 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b81e03a24966dca0b119eff0549a4e44befff419 ]

As all the subbuffer order (subbuffer sizes) must be the same throughout
the ring buffer, check the order of the buffers that are doing a CPU
buffer swap in ring_buffer_swap_cpu() to make sure they are the same.

If they are not the same, then fail to do the swap; otherwise the ring
buffer will think the CPU buffer has a specific subbuffer size when it
does not.

Link: 
https://lore.kernel.org/linux-trace-kernel/20231219185629.467894...@goodmis.org

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Cc: Andrew Morton 
Cc: Tzvetomir Stoyanov 
Cc: Vincent Donnefort 
Cc: Kent Overstreet 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 9286f88fcd32..f9d9309884d1 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -5461,6 +5461,9 @@ int ring_buffer_swap_cpu(struct trace_buffer *buffer_a,
if (cpu_buffer_a->nr_pages != cpu_buffer_b->nr_pages)
goto out;
 
+   if (buffer_a->subbuf_order != buffer_b->subbuf_order)
+   goto out;
+
ret = -EAGAIN;
 
if (atomic_read(&cpu_buffer_a->record_disabled))
-- 
2.43.0




[PATCH AUTOSEL 5.15 01/11] arch: consolidate arch_irq_work_raise prototypes

2024-01-15 Thread Sasha Levin
From: Arnd Bergmann 

[ Upstream commit 64bac5ea17d527872121adddfee869c7a0618f8f ]

The prototype was hidden in an #ifdef on x86, which causes a warning:

kernel/irq_work.c:72:13: error: no previous prototype for 'arch_irq_work_raise' 
[-Werror=missing-prototypes]

Some architectures have a working prototype, while others don't.
Fix this by providing it in only one place that is always visible.

Reviewed-by: Alexander Gordeev 
Acked-by: Catalin Marinas 
Acked-by: Palmer Dabbelt 
Acked-by: Guo Ren 
Signed-off-by: Arnd Bergmann 
Signed-off-by: Sasha Levin 
---
 arch/arm64/include/asm/irq_work.h   | 2 --
 arch/csky/include/asm/irq_work.h| 2 +-
 arch/powerpc/include/asm/irq_work.h | 1 -
 arch/riscv/include/asm/irq_work.h   | 2 +-
 arch/s390/include/asm/irq_work.h| 2 --
 arch/x86/include/asm/irq_work.h | 1 -
 include/linux/irq_work.h| 3 +++
 7 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/include/asm/irq_work.h 
b/arch/arm64/include/asm/irq_work.h
index 81bbfa3a035b..a1020285ea75 100644
--- a/arch/arm64/include/asm/irq_work.h
+++ b/arch/arm64/include/asm/irq_work.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_IRQ_WORK_H
 #define __ASM_IRQ_WORK_H
 
-extern void arch_irq_work_raise(void);
-
 static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
diff --git a/arch/csky/include/asm/irq_work.h b/arch/csky/include/asm/irq_work.h
index 33aaf39d6f94..d39fcc1f5395 100644
--- a/arch/csky/include/asm/irq_work.h
+++ b/arch/csky/include/asm/irq_work.h
@@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* __ASM_CSKY_IRQ_WORK_H */
diff --git a/arch/powerpc/include/asm/irq_work.h 
b/arch/powerpc/include/asm/irq_work.h
index b8b0be8f1a07..c6d3078bd8c3 100644
--- a/arch/powerpc/include/asm/irq_work.h
+++ b/arch/powerpc/include/asm/irq_work.h
@@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
 
 #endif /* _ASM_POWERPC_IRQ_WORK_H */
diff --git a/arch/riscv/include/asm/irq_work.h 
b/arch/riscv/include/asm/irq_work.h
index b53891964ae0..b27a4d64fc6a 100644
--- a/arch/riscv/include/asm/irq_work.h
+++ b/arch/riscv/include/asm/irq_work.h
@@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return IS_ENABLED(CONFIG_SMP);
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* _ASM_RISCV_IRQ_WORK_H */
diff --git a/arch/s390/include/asm/irq_work.h b/arch/s390/include/asm/irq_work.h
index 603783766d0a..f00c9f610d5a 100644
--- a/arch/s390/include/asm/irq_work.h
+++ b/arch/s390/include/asm/irq_work.h
@@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return true;
 }
 
-void arch_irq_work_raise(void);
-
 #endif /* _ASM_S390_IRQ_WORK_H */
diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
index 800ffce0db29..6b4d36c95165 100644
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return boot_cpu_has(X86_FEATURE_APIC);
 }
-extern void arch_irq_work_raise(void);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index ec2a47a81e42..ee5f9120c4d7 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -58,6 +58,9 @@ void irq_work_sync(struct irq_work *work);
 void irq_work_run(void);
 bool irq_work_needs_cpu(void);
 void irq_work_single(void *arg);
+
+void arch_irq_work_raise(void);
+
 #else
 static inline bool irq_work_needs_cpu(void) { return false; }
 static inline void irq_work_run(void) { }
-- 
2.43.0




[PATCH AUTOSEL 6.1 01/14] arch: consolidate arch_irq_work_raise prototypes

2024-01-15 Thread Sasha Levin
From: Arnd Bergmann 

[ Upstream commit 64bac5ea17d527872121adddfee869c7a0618f8f ]

The prototype was hidden in an #ifdef on x86, which causes a warning:

kernel/irq_work.c:72:13: error: no previous prototype for 'arch_irq_work_raise' 
[-Werror=missing-prototypes]

Some architectures have a working prototype, while others don't.
Fix this by providing it in only one place that is always visible.

Reviewed-by: Alexander Gordeev 
Acked-by: Catalin Marinas 
Acked-by: Palmer Dabbelt 
Acked-by: Guo Ren 
Signed-off-by: Arnd Bergmann 
Signed-off-by: Sasha Levin 
---
 arch/arm/include/asm/irq_work.h | 2 --
 arch/arm64/include/asm/irq_work.h   | 2 --
 arch/csky/include/asm/irq_work.h| 2 +-
 arch/powerpc/include/asm/irq_work.h | 1 -
 arch/riscv/include/asm/irq_work.h   | 2 +-
 arch/s390/include/asm/irq_work.h| 2 --
 arch/x86/include/asm/irq_work.h | 1 -
 include/linux/irq_work.h| 3 +++
 8 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/irq_work.h b/arch/arm/include/asm/irq_work.h
index 3149e4dc1b54..8895999834cc 100644
--- a/arch/arm/include/asm/irq_work.h
+++ b/arch/arm/include/asm/irq_work.h
@@ -9,6 +9,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return is_smp();
 }
 
-extern void arch_irq_work_raise(void);
-
 #endif /* _ASM_ARM_IRQ_WORK_H */
diff --git a/arch/arm64/include/asm/irq_work.h 
b/arch/arm64/include/asm/irq_work.h
index 81bbfa3a035b..a1020285ea75 100644
--- a/arch/arm64/include/asm/irq_work.h
+++ b/arch/arm64/include/asm/irq_work.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_IRQ_WORK_H
 #define __ASM_IRQ_WORK_H
 
-extern void arch_irq_work_raise(void);
-
 static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
diff --git a/arch/csky/include/asm/irq_work.h b/arch/csky/include/asm/irq_work.h
index 33aaf39d6f94..d39fcc1f5395 100644
--- a/arch/csky/include/asm/irq_work.h
+++ b/arch/csky/include/asm/irq_work.h
@@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* __ASM_CSKY_IRQ_WORK_H */
diff --git a/arch/powerpc/include/asm/irq_work.h 
b/arch/powerpc/include/asm/irq_work.h
index b8b0be8f1a07..c6d3078bd8c3 100644
--- a/arch/powerpc/include/asm/irq_work.h
+++ b/arch/powerpc/include/asm/irq_work.h
@@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
 
 #endif /* _ASM_POWERPC_IRQ_WORK_H */
diff --git a/arch/riscv/include/asm/irq_work.h 
b/arch/riscv/include/asm/irq_work.h
index b53891964ae0..b27a4d64fc6a 100644
--- a/arch/riscv/include/asm/irq_work.h
+++ b/arch/riscv/include/asm/irq_work.h
@@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return IS_ENABLED(CONFIG_SMP);
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* _ASM_RISCV_IRQ_WORK_H */
diff --git a/arch/s390/include/asm/irq_work.h b/arch/s390/include/asm/irq_work.h
index 603783766d0a..f00c9f610d5a 100644
--- a/arch/s390/include/asm/irq_work.h
+++ b/arch/s390/include/asm/irq_work.h
@@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return true;
 }
 
-void arch_irq_work_raise(void);
-
 #endif /* _ASM_S390_IRQ_WORK_H */
diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
index 800ffce0db29..6b4d36c95165 100644
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return boot_cpu_has(X86_FEATURE_APIC);
 }
-extern void arch_irq_work_raise(void);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 8cd11a223260..136f2980cba3 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -66,6 +66,9 @@ void irq_work_sync(struct irq_work *work);
 void irq_work_run(void);
 bool irq_work_needs_cpu(void);
 void irq_work_single(void *arg);
+
+void arch_irq_work_raise(void);
+
 #else
 static inline bool irq_work_needs_cpu(void) { return false; }
 static inline void irq_work_run(void) { }
-- 
2.43.0




[PATCH AUTOSEL 6.6 02/19] arch: consolidate arch_irq_work_raise prototypes

2024-01-15 Thread Sasha Levin
From: Arnd Bergmann 

[ Upstream commit 64bac5ea17d527872121adddfee869c7a0618f8f ]

The prototype was hidden in an #ifdef on x86, which causes a warning:

kernel/irq_work.c:72:13: error: no previous prototype for 'arch_irq_work_raise' 
[-Werror=missing-prototypes]

Some architectures have a working prototype, while others don't.
Fix this by providing it in only one place that is always visible.

Reviewed-by: Alexander Gordeev 
Acked-by: Catalin Marinas 
Acked-by: Palmer Dabbelt 
Acked-by: Guo Ren 
Signed-off-by: Arnd Bergmann 
Signed-off-by: Sasha Levin 
---
 arch/arm/include/asm/irq_work.h | 2 --
 arch/arm64/include/asm/irq_work.h   | 2 --
 arch/csky/include/asm/irq_work.h| 2 +-
 arch/powerpc/include/asm/irq_work.h | 1 -
 arch/riscv/include/asm/irq_work.h   | 2 +-
 arch/s390/include/asm/irq_work.h| 2 --
 arch/x86/include/asm/irq_work.h | 1 -
 include/linux/irq_work.h| 3 +++
 8 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/irq_work.h b/arch/arm/include/asm/irq_work.h
index 3149e4dc1b54..8895999834cc 100644
--- a/arch/arm/include/asm/irq_work.h
+++ b/arch/arm/include/asm/irq_work.h
@@ -9,6 +9,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return is_smp();
 }
 
-extern void arch_irq_work_raise(void);
-
 #endif /* _ASM_ARM_IRQ_WORK_H */
diff --git a/arch/arm64/include/asm/irq_work.h 
b/arch/arm64/include/asm/irq_work.h
index 81bbfa3a035b..a1020285ea75 100644
--- a/arch/arm64/include/asm/irq_work.h
+++ b/arch/arm64/include/asm/irq_work.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_IRQ_WORK_H
 #define __ASM_IRQ_WORK_H
 
-extern void arch_irq_work_raise(void);
-
 static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
diff --git a/arch/csky/include/asm/irq_work.h b/arch/csky/include/asm/irq_work.h
index 33aaf39d6f94..d39fcc1f5395 100644
--- a/arch/csky/include/asm/irq_work.h
+++ b/arch/csky/include/asm/irq_work.h
@@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* __ASM_CSKY_IRQ_WORK_H */
diff --git a/arch/powerpc/include/asm/irq_work.h 
b/arch/powerpc/include/asm/irq_work.h
index b8b0be8f1a07..c6d3078bd8c3 100644
--- a/arch/powerpc/include/asm/irq_work.h
+++ b/arch/powerpc/include/asm/irq_work.h
@@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
 
 #endif /* _ASM_POWERPC_IRQ_WORK_H */
diff --git a/arch/riscv/include/asm/irq_work.h 
b/arch/riscv/include/asm/irq_work.h
index b53891964ae0..b27a4d64fc6a 100644
--- a/arch/riscv/include/asm/irq_work.h
+++ b/arch/riscv/include/asm/irq_work.h
@@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return IS_ENABLED(CONFIG_SMP);
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* _ASM_RISCV_IRQ_WORK_H */
diff --git a/arch/s390/include/asm/irq_work.h b/arch/s390/include/asm/irq_work.h
index 603783766d0a..f00c9f610d5a 100644
--- a/arch/s390/include/asm/irq_work.h
+++ b/arch/s390/include/asm/irq_work.h
@@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return true;
 }
 
-void arch_irq_work_raise(void);
-
 #endif /* _ASM_S390_IRQ_WORK_H */
diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
index 800ffce0db29..6b4d36c95165 100644
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return boot_cpu_has(X86_FEATURE_APIC);
 }
-extern void arch_irq_work_raise(void);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 8cd11a223260..136f2980cba3 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -66,6 +66,9 @@ void irq_work_sync(struct irq_work *work);
 void irq_work_run(void);
 bool irq_work_needs_cpu(void);
 void irq_work_single(void *arg);
+
+void arch_irq_work_raise(void);
+
 #else
 static inline bool irq_work_needs_cpu(void) { return false; }
 static inline void irq_work_run(void) { }
-- 
2.43.0




[PATCH AUTOSEL 6.7 02/21] arch: consolidate arch_irq_work_raise prototypes

2024-01-15 Thread Sasha Levin
From: Arnd Bergmann 

[ Upstream commit 64bac5ea17d527872121adddfee869c7a0618f8f ]

The prototype was hidden in an #ifdef on x86, which causes a warning:

kernel/irq_work.c:72:13: error: no previous prototype for 'arch_irq_work_raise' 
[-Werror=missing-prototypes]

Some architectures have a working prototype, while others don't.
Fix this by providing it in only one place that is always visible.

Reviewed-by: Alexander Gordeev 
Acked-by: Catalin Marinas 
Acked-by: Palmer Dabbelt 
Acked-by: Guo Ren 
Signed-off-by: Arnd Bergmann 
Signed-off-by: Sasha Levin 
---
 arch/arm/include/asm/irq_work.h | 2 --
 arch/arm64/include/asm/irq_work.h   | 2 --
 arch/csky/include/asm/irq_work.h| 2 +-
 arch/powerpc/include/asm/irq_work.h | 1 -
 arch/riscv/include/asm/irq_work.h   | 2 +-
 arch/s390/include/asm/irq_work.h| 2 --
 arch/x86/include/asm/irq_work.h | 1 -
 include/linux/irq_work.h| 3 +++
 8 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/irq_work.h b/arch/arm/include/asm/irq_work.h
index 3149e4dc1b54..8895999834cc 100644
--- a/arch/arm/include/asm/irq_work.h
+++ b/arch/arm/include/asm/irq_work.h
@@ -9,6 +9,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return is_smp();
 }
 
-extern void arch_irq_work_raise(void);
-
 #endif /* _ASM_ARM_IRQ_WORK_H */
diff --git a/arch/arm64/include/asm/irq_work.h 
b/arch/arm64/include/asm/irq_work.h
index 81bbfa3a035b..a1020285ea75 100644
--- a/arch/arm64/include/asm/irq_work.h
+++ b/arch/arm64/include/asm/irq_work.h
@@ -2,8 +2,6 @@
 #ifndef __ASM_IRQ_WORK_H
 #define __ASM_IRQ_WORK_H
 
-extern void arch_irq_work_raise(void);
-
 static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
diff --git a/arch/csky/include/asm/irq_work.h b/arch/csky/include/asm/irq_work.h
index 33aaf39d6f94..d39fcc1f5395 100644
--- a/arch/csky/include/asm/irq_work.h
+++ b/arch/csky/include/asm/irq_work.h
@@ -7,5 +7,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* __ASM_CSKY_IRQ_WORK_H */
diff --git a/arch/powerpc/include/asm/irq_work.h 
b/arch/powerpc/include/asm/irq_work.h
index b8b0be8f1a07..c6d3078bd8c3 100644
--- a/arch/powerpc/include/asm/irq_work.h
+++ b/arch/powerpc/include/asm/irq_work.h
@@ -6,6 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return true;
 }
-extern void arch_irq_work_raise(void);
 
 #endif /* _ASM_POWERPC_IRQ_WORK_H */
diff --git a/arch/riscv/include/asm/irq_work.h 
b/arch/riscv/include/asm/irq_work.h
index b53891964ae0..b27a4d64fc6a 100644
--- a/arch/riscv/include/asm/irq_work.h
+++ b/arch/riscv/include/asm/irq_work.h
@@ -6,5 +6,5 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return IS_ENABLED(CONFIG_SMP);
 }
-extern void arch_irq_work_raise(void);
+
 #endif /* _ASM_RISCV_IRQ_WORK_H */
diff --git a/arch/s390/include/asm/irq_work.h b/arch/s390/include/asm/irq_work.h
index 603783766d0a..f00c9f610d5a 100644
--- a/arch/s390/include/asm/irq_work.h
+++ b/arch/s390/include/asm/irq_work.h
@@ -7,6 +7,4 @@ static inline bool arch_irq_work_has_interrupt(void)
return true;
 }
 
-void arch_irq_work_raise(void);
-
 #endif /* _ASM_S390_IRQ_WORK_H */
diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
index 800ffce0db29..6b4d36c95165 100644
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -9,7 +9,6 @@ static inline bool arch_irq_work_has_interrupt(void)
 {
return boot_cpu_has(X86_FEATURE_APIC);
 }
-extern void arch_irq_work_raise(void);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
diff --git a/include/linux/irq_work.h b/include/linux/irq_work.h
index 8cd11a223260..136f2980cba3 100644
--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -66,6 +66,9 @@ void irq_work_sync(struct irq_work *work);
 void irq_work_run(void);
 bool irq_work_needs_cpu(void);
 void irq_work_single(void *arg);
+
+void arch_irq_work_raise(void);
+
 #else
 static inline bool irq_work_needs_cpu(void) { return false; }
 static inline void irq_work_run(void) { }
-- 
2.43.0




[PATCH AUTOSEL 4.14 6/6] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: 
https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f0d4ff2db2ef0..b1acec3e4dc3b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2774,6 +2774,12 @@ rb_reserve_next_event(struct ring_buffer *buffer,
int nr_loops = 0;
u64 diff;
 
+   /* ring buffer does cmpxchg, make sure it is safe in NMI context */
+   if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+   (unlikely(in_nmi()))) {
+   return NULL;
+   }
+
rb_start_commit(cpu_buffer);
 
 #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
-- 
2.43.0




[PATCH AUTOSEL 4.14 5/6] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent an overflow, bound
the print by the maximum size buf can be, computed from the event size and
the offset of the field within the entry.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %.*s", max, field->buf);

Link: 
https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index e3ab66e6fd85c..3ca9ddfef2b8f 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1319,11 +1319,12 @@ static enum print_line_t trace_print_print(struct 
trace_iterator *iter,
 {
struct print_entry *field;
struct trace_seq *s = &iter->seq;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
seq_print_ip_sym(s, field->ip, flags);
-   trace_seq_printf(s, ": %s", field->buf);
+   trace_seq_printf(s, ": %.*s", max, field->buf);
 
return trace_handle_return(s);
 }
@@ -1332,10 +1333,11 @@ static enum print_line_t trace_print_raw(struct 
trace_iterator *iter, int flags,
 struct trace_event *event)
 {
struct print_entry *field;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
-   trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+   trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 4.14 4/6] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #_-=> irqs-off/BH-disabled
 #   / _=> need-resched
 #  | / _---=> hardirq/softirq
 #  || / _--=> preempt-depth
 #  ||| / _-=> migrate-disable
 #   / delay
 #   TASK-PID CPU#  |  TIMESTAMP  FUNCTION
 #  | | |   | | |
<...>-859 [001] .   141.118951: tracing_mark_write  
 <...>-859 [001] .   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #_-=> irqs-off/BH-disabled
 #   / _=> need-resched
 #  | / _---=> hardirq/softirq
 #  || / _--=> preempt-depth
 #  ||| / _-=> migrate-disable
 #   / delay
 #   TASK-PID CPU#  |  TIMESTAMP  FUNCTION
 #  | | |   | | |
<...>-852 [001] .   121.550551: tracing_mark_write[LINE TOO BIG]
<...>-852 [001] .   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: 
https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index c5fe020336bea..755d6146c738c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3826,7 +3826,11 @@ static int s_show(struct seq_file *m, void *v)
iter->leftover = ret;
 
} else {
-   print_trace_line(iter);
+   ret = print_trace_line(iter);
+   if (ret == TRACE_TYPE_PARTIAL_LINE) {
+   iter->seq.full = 0;
+   trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+   }
ret = trace_print_seq(m, >seq);
/*
 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 4.19 6/6] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 320aa60664dc9..efa11c638c841 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2889,6 +2889,12 @@ rb_reserve_next_event(struct ring_buffer *buffer,
int nr_loops = 0;
u64 diff;
 
+	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
+	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+		return NULL;
+	}
+
rb_start_commit(cpu_buffer);
 
 #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
-- 
2.43.0




[PATCH AUTOSEL 4.19 5/6] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %.*s", max, field->buf);

Link: https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 62015d62dd6f5..43fb832d26d23 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1320,11 +1320,12 @@ static enum print_line_t trace_print_print(struct trace_iterator *iter,
 {
 	struct print_entry *field;
 	struct trace_seq *s = &iter->seq;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
 	seq_print_ip_sym(s, field->ip, flags);
-	trace_seq_printf(s, ": %s", field->buf);
+	trace_seq_printf(s, ": %.*s", max, field->buf);
 
 	return trace_handle_return(s);
 }
@@ -1333,10 +1334,11 @@ static enum print_line_t trace_print_raw(struct trace_iterator *iter, int flags,
 					 struct trace_event *event)
 {
 	struct print_entry *field;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
 	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 4.19 4/6] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-859     [001] .....   141.118951: tracing_mark_write
            <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
            <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b43d681b072fa..e6b2d443bab9e 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3828,7 +3828,11 @@ static int s_show(struct seq_file *m, void *v)
 			iter->leftover = ret;
 
 	} else {
-		print_trace_line(iter);
+		ret = print_trace_line(iter);
+		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+			iter->seq.full = 0;
+			trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+		}
 		ret = trace_print_seq(m, &iter->seq);
 		/*
 		 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 5.4 7/7] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 9d6ba38791961..983fc4475c273 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2969,6 +2969,12 @@ rb_reserve_next_event(struct ring_buffer *buffer,
int nr_loops = 0;
u64 diff;
 
+	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
+	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+		return NULL;
+	}
+
rb_start_commit(cpu_buffer);
 
 #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
-- 
2.43.0




[PATCH AUTOSEL 5.4 6/7] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %.*s", max, field->buf);

Link: https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index b194dd1c8420f..9ffe54ff3edb2 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1291,11 +1291,12 @@ static enum print_line_t trace_print_print(struct trace_iterator *iter,
 {
 	struct print_entry *field;
 	struct trace_seq *s = &iter->seq;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
 	seq_print_ip_sym(s, field->ip, flags);
-	trace_seq_printf(s, ": %s", field->buf);
+	trace_seq_printf(s, ": %.*s", max, field->buf);
 
 	return trace_handle_return(s);
 }
@@ -1304,10 +1305,11 @@ static enum print_line_t trace_print_raw(struct trace_iterator *iter, int flags,
 					 struct trace_event *event)
 {
 	struct print_entry *field;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
 	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 5.4 5/7] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-859     [001] .....   141.118951: tracing_mark_write
            <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
            <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d7ca8f97b315f..35c1500855566 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4078,7 +4078,11 @@ static int s_show(struct seq_file *m, void *v)
 			iter->leftover = ret;
 
 	} else {
-		print_trace_line(iter);
+		ret = print_trace_line(iter);
+		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+			iter->seq.full = 0;
+			trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+		}
 		ret = trace_print_seq(m, &iter->seq);
 		/*
 		 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 5.10 8/8] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 7e1148aafd284..e58eba8419b12 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3431,6 +3431,12 @@ rb_reserve_next_event(struct trace_buffer *buffer,
int nr_loops = 0;
int add_ts_default;
 
+	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
+	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+		return NULL;
+	}
+
rb_start_commit(cpu_buffer);
/* The commit page can not change after this */
 
-- 
2.43.0




[PATCH AUTOSEL 5.10 7/8] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %.*s", max, field->buf);

Link: https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 94b0991717b6d..753b84c50848a 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1313,11 +1313,12 @@ static enum print_line_t trace_print_print(struct trace_iterator *iter,
 {
 	struct print_entry *field;
 	struct trace_seq *s = &iter->seq;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
 	seq_print_ip_sym(s, field->ip, flags);
-	trace_seq_printf(s, ": %s", field->buf);
+	trace_seq_printf(s, ": %.*s", max, field->buf);
 
 	return trace_handle_return(s);
 }
@@ -1326,10 +1327,11 @@ static enum print_line_t trace_print_raw(struct trace_iterator *iter, int flags,
 					 struct trace_event *event)
 {
 	struct print_entry *field;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
 	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 5.10 6/8] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-859     [001] .....   141.118951: tracing_mark_write
            <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
            <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 4e0411b19ef96..6960934b961b9 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4325,7 +4325,11 @@ static int s_show(struct seq_file *m, void *v)
 			iter->leftover = ret;
 
 	} else {
-		print_trace_line(iter);
+		ret = print_trace_line(iter);
+		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+			iter->seq.full = 0;
+			trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+		}
 		ret = trace_print_seq(m, &iter->seq);
 		/*
 		 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 5.15 13/13] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 16fce72a7601c..3f80497ac60ff 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3668,6 +3668,12 @@ rb_reserve_next_event(struct trace_buffer *buffer,
int nr_loops = 0;
int add_ts_default;
 
+	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
+	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+		return NULL;
+	}
+
rb_start_commit(cpu_buffer);
/* The commit page can not change after this */
 
-- 
2.43.0




[PATCH AUTOSEL 5.15 12/13] tracing: Fix uaf issue when open the hist or hist_debug file

2023-12-18 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit 1cc111b9cddc71ce161cd388f11f0e9048edffdb ]

KASAN reports the following issue. The root cause is that opening the
'hist' file of an instance and accessing its 'trace_event_file' in
hist_show() can race with removal of the instance, which frees the
'trace_event_file'. The 'hist_debug' file has the same problem. To fix
it, call tracing_{open,release}_file_tr() in the file_operations
callbacks to take a reference and keep 'trace_event_file' from being
freed.

  BUG: KASAN: slab-use-after-free in hist_show+0x11e0/0x1278
  Read of size 8 at addr 242541e336b8 by task head/190

  CPU: 4 PID: 190 Comm: head Not tainted 6.7.0-rc5-g26aff849438c #133
  Hardware name: linux,dummy-virt (DT)
  Call trace:
   dump_backtrace+0x98/0xf8
   show_stack+0x1c/0x30
   dump_stack_lvl+0x44/0x58
   print_report+0xf0/0x5a0
   kasan_report+0x80/0xc0
   __asan_report_load8_noabort+0x1c/0x28
   hist_show+0x11e0/0x1278
   seq_read_iter+0x344/0xd78
   seq_read+0x128/0x1c0
   vfs_read+0x198/0x6c8
   ksys_read+0xf4/0x1e0
   __arm64_sys_read+0x70/0xa8
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Allocated by task 188:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_alloc_info+0x20/0x30
   __kasan_slab_alloc+0x6c/0x80
   kmem_cache_alloc+0x15c/0x4a8
   trace_create_new_event+0x84/0x348
   __trace_add_new_event+0x18/0x88
   event_trace_add_tracer+0xc4/0x1a0
   trace_array_create_dir+0x6c/0x100
   trace_array_create+0x2e8/0x568
   instance_mkdir+0x48/0x80
   tracefs_syscall_mkdir+0x90/0xe8
   vfs_mkdir+0x3c4/0x610
   do_mkdirat+0x144/0x200
   __arm64_sys_mkdirat+0x8c/0xc0
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Freed by task 191:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_free_info+0x34/0x58
   __kasan_slab_free+0xe4/0x158
   kmem_cache_free+0x19c/0x508
   event_file_put+0xa0/0x120
   remove_event_file_dir+0x180/0x320
   event_trace_del_tracer+0xb0/0x180
   __remove_instance+0x224/0x508
   instance_rmdir+0x44/0x78
   tracefs_syscall_rmdir+0xbc/0x140
   vfs_rmdir+0x1cc/0x4c8
   do_rmdir+0x220/0x2b8
   __arm64_sys_unlinkat+0xc0/0x100
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

Link: https://lore.kernel.org/linux-trace-kernel/20231214012153.676155-1-zhengyeji...@huawei.com

Suggested-by: Steven Rostedt 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c |  6 ++
 kernel/trace/trace.h |  1 +
 kernel/trace/trace_events_hist.c | 12 
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 79b1bc4501dce..76d7b7519405c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4890,6 +4890,12 @@ int tracing_release_file_tr(struct inode *inode, struct file *filp)
return 0;
 }
 
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp)
+{
+   tracing_release_file_tr(inode, filp);
+   return single_release(inode, filp);
+}
+
 static int tracing_mark_open(struct inode *inode, struct file *filp)
 {
stream_open(inode, filp);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index c6eb116dc279d..449a8bd873cf7 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -593,6 +593,7 @@ int tracing_open_generic(struct inode *inode, struct file *filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 int tracing_open_file_tr(struct inode *inode, struct file *filp);
 int tracing_release_file_tr(struct inode *inode, struct file *filp);
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
 bool tracer_tracing_is_on(struct trace_array *tr);
 void tracer_tracing_on(struct trace_array *tr);
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index c32a53f089229..e7799814a3c8a 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -4946,10 +4946,12 @@ static int event_hist_open(struct inode *inode, struct file *file)
 {
int ret;
 
-   ret = security_locked_down(LOCKDOWN_TRACEFS);
+   ret = tracing_open_file_tr(inode, file);
if (ret)
return ret;
 
+   /* Clear private_data to avoid warning in single_open() */
+   file->private_data = NULL;
return single_open(file, hist_show, file);
 }
 
@@ -4957,7 +4959,7 @@ const struct file_operations event_hist_fops = {
.open = event_hist_open,
.read = seq_read,
.llseek = seq_lseek,
-	.release = single_release,
+	.release = tracing_single_release_file_tr,

[PATCH AUTOSEL 5.15 09/13] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %.*s", max, field->buf);

Link: https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 6b4d3f3abdae2..4c4b84e507f74 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1446,11 +1446,12 @@ static enum print_line_t trace_print_print(struct trace_iterator *iter,
 {
 	struct print_entry *field;
 	struct trace_seq *s = &iter->seq;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
 	seq_print_ip_sym(s, field->ip, flags);
-	trace_seq_printf(s, ": %s", field->buf);
+	trace_seq_printf(s, ": %.*s", max, field->buf);
 
 	return trace_handle_return(s);
 }
@@ -1459,10 +1460,11 @@ static enum print_line_t trace_print_raw(struct trace_iterator *iter, int flags,
 					 struct trace_event *event)
 {
 	struct print_entry *field;
+	int max = iter->ent_size - offsetof(struct print_entry, buf);
 
 	trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
 	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 5.15 08/13] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-859     [001] .....   141.118951: tracing_mark_write
            <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
            <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
            <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 657ecb8f03545..79b1bc4501dce 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4653,7 +4653,11 @@ static int s_show(struct seq_file *m, void *v)
 			iter->leftover = ret;
 
 	} else {
-		print_trace_line(iter);
+		ret = print_trace_line(iter);
+		if (ret == TRACE_TYPE_PARTIAL_LINE) {
+			iter->seq.full = 0;
+			trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+		}
 		ret = trace_print_seq(m, &iter->seq);
 		/*
 		 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 6.1 15/15] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index c02a4cb879913..c821b618a622f 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3711,6 +3711,12 @@ rb_reserve_next_event(struct trace_buffer *buffer,
int nr_loops = 0;
int add_ts_default;
 
+	/* ring buffer does cmpxchg, make sure it is safe in NMI context */
+	if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+		return NULL;
+	}
+
rb_start_commit(cpu_buffer);
/* The commit page can not change after this */
 
-- 
2.43.0




[PATCH AUTOSEL 6.1 14/15] tracing: Fix uaf issue when open the hist or hist_debug file

2023-12-18 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit 1cc111b9cddc71ce161cd388f11f0e9048edffdb ]

KASAN reports the following issue. The root cause is that opening the
'hist' file of an instance and accessing its 'trace_event_file' in
hist_show() can race with removal of the instance, which frees the
'trace_event_file'. The 'hist_debug' file has the same problem. To fix
it, call tracing_{open,release}_file_tr() in the file_operations
callbacks to take a reference and keep 'trace_event_file' from being
freed.

  BUG: KASAN: slab-use-after-free in hist_show+0x11e0/0x1278
  Read of size 8 at addr 242541e336b8 by task head/190

  CPU: 4 PID: 190 Comm: head Not tainted 6.7.0-rc5-g26aff849438c #133
  Hardware name: linux,dummy-virt (DT)
  Call trace:
   dump_backtrace+0x98/0xf8
   show_stack+0x1c/0x30
   dump_stack_lvl+0x44/0x58
   print_report+0xf0/0x5a0
   kasan_report+0x80/0xc0
   __asan_report_load8_noabort+0x1c/0x28
   hist_show+0x11e0/0x1278
   seq_read_iter+0x344/0xd78
   seq_read+0x128/0x1c0
   vfs_read+0x198/0x6c8
   ksys_read+0xf4/0x1e0
   __arm64_sys_read+0x70/0xa8
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Allocated by task 188:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_alloc_info+0x20/0x30
   __kasan_slab_alloc+0x6c/0x80
   kmem_cache_alloc+0x15c/0x4a8
   trace_create_new_event+0x84/0x348
   __trace_add_new_event+0x18/0x88
   event_trace_add_tracer+0xc4/0x1a0
   trace_array_create_dir+0x6c/0x100
   trace_array_create+0x2e8/0x568
   instance_mkdir+0x48/0x80
   tracefs_syscall_mkdir+0x90/0xe8
   vfs_mkdir+0x3c4/0x610
   do_mkdirat+0x144/0x200
   __arm64_sys_mkdirat+0x8c/0xc0
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Freed by task 191:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_free_info+0x34/0x58
   __kasan_slab_free+0xe4/0x158
   kmem_cache_free+0x19c/0x508
   event_file_put+0xa0/0x120
   remove_event_file_dir+0x180/0x320
   event_trace_del_tracer+0xb0/0x180
   __remove_instance+0x224/0x508
   instance_rmdir+0x44/0x78
   tracefs_syscall_rmdir+0xbc/0x140
   vfs_rmdir+0x1cc/0x4c8
   do_rmdir+0x220/0x2b8
   __arm64_sys_unlinkat+0xc0/0x100
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

Link: https://lore.kernel.org/linux-trace-kernel/20231214012153.676155-1-zhengyeji...@huawei.com

Suggested-by: Steven Rostedt 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c |  6 ++
 kernel/trace/trace.h |  1 +
 kernel/trace/trace_events_hist.c | 12 
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index cfab12e266d98..36111de8b3833 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4902,6 +4902,12 @@ int tracing_release_file_tr(struct inode *inode, struct file *filp)
return 0;
 }
 
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp)
+{
+   tracing_release_file_tr(inode, filp);
+   return single_release(inode, filp);
+}
+
 static int tracing_mark_open(struct inode *inode, struct file *filp)
 {
stream_open(inode, filp);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 10aaafa2936dc..aad7fcd84617c 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -592,6 +592,7 @@ int tracing_open_generic(struct inode *inode, struct file *filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 int tracing_open_file_tr(struct inode *inode, struct file *filp);
 int tracing_release_file_tr(struct inode *inode, struct file *filp);
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
 bool tracer_tracing_is_on(struct trace_array *tr);
 void tracer_tracing_on(struct trace_array *tr);
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 1470af2190735..3b0da1bddf633 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5532,10 +5532,12 @@ static int event_hist_open(struct inode *inode, struct 
file *file)
 {
int ret;
 
-   ret = security_locked_down(LOCKDOWN_TRACEFS);
+   ret = tracing_open_file_tr(inode, file);
if (ret)
return ret;
 
+   /* Clear private_data to avoid warning in single_open() */
+   file->private_data = NULL;
return single_open(file, hist_show, file);
 }
 
@@ -5543,7 +5545,7 @@ const struct file_operations event_hist_fops = {
.open = event_hist_open,
.read = seq_read,
.llseek = seq_lseek,
-   .rele

[PATCH AUTOSEL 6.1 11/15] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %*.s", max, field->buf);

Link: 
https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index 5cd4fb6563068..bf1965b180992 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1445,11 +1445,12 @@ static enum print_line_t trace_print_print(struct 
trace_iterator *iter,
 {
struct print_entry *field;
	struct trace_seq *s = &iter->seq;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
seq_print_ip_sym(s, field->ip, flags);
-   trace_seq_printf(s, ": %s", field->buf);
+   trace_seq_printf(s, ": %.*s", max, field->buf);
 
return trace_handle_return(s);
 }
@@ -1458,10 +1459,11 @@ static enum print_line_t trace_print_raw(struct 
trace_iterator *iter, int flags,
 struct trace_event *event)
 {
struct print_entry *field;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 6.1 10/15] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
           <...>-859     [001] .....   141.118951: tracing_mark_write
           <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
           <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
           <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.
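The overflow-marker pattern above can be sketched in userspace C. This is a simplified stand-in, not the kernel's trace_seq: a fixed buffer with a "full" flag, where a failed print clears the flag and substitutes a visible marker instead of silently dropping the line.

```c
#include <string.h>

/* Minimal stand-in for trace_seq: a fixed buffer plus a "full" flag.
 * The type name and buffer size are illustrative. */
struct mini_seq {
    char buf[16];
    size_t len;
    int full;
};

/* Append s; on overflow mark the seq full and report a partial line. */
static int seq_puts_mini(struct mini_seq *seq, const char *s)
{
    size_t n = strlen(s);
    if (seq->full || seq->len + n >= sizeof(seq->buf)) {
        seq->full = 1;
        return -1;              /* like TRACE_TYPE_PARTIAL_LINE */
    }
    memcpy(seq->buf + seq->len, s, n + 1);
    seq->len += n;
    return 0;
}

/* Mirror of the s_show() change: when printing a line fails, clear the
 * full flag and emit a marker instead of dropping the output. */
static void show_line(struct mini_seq *seq, const char *line)
{
    if (seq_puts_mini(seq, line) != 0) {
        seq->full = 0;
        seq_puts_mini(seq, "[LINE TOO BIG]");
    }
}
```

Clearing `full` before writing the marker matters: once the flag is set, every later append is refused, so without the reset the marker itself would be dropped too.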

Link: 
https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index d2db4d6f0f2fd..cfab12e266d98 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4665,7 +4665,11 @@ static int s_show(struct seq_file *m, void *v)
iter->leftover = ret;
 
} else {
-   print_trace_line(iter);
+   ret = print_trace_line(iter);
+   if (ret == TRACE_TYPE_PARTIAL_LINE) {
+   iter->seq.full = 0;
+   trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+   }
	ret = trace_print_seq(m, &iter->seq);
/*
 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 6.6 18/18] ring-buffer: Do not record in NMI if the arch does not support cmpxchg in NMI

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 712292308af2265cd9b126aedfa987f10f452a33 ]

As the ring buffer recording requires cmpxchg() to work, if the
architecture does not support cmpxchg in NMI, then do not do any recording
within an NMI.

Link: 
https://lore.kernel.org/linux-trace-kernel/20231213175403.6fc18...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f1ef4329343bf..30c8d01a4d08b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -3714,6 +3714,12 @@ rb_reserve_next_event(struct trace_buffer *buffer,
int nr_loops = 0;
int add_ts_default;
 
+   /* ring buffer does cmpxchg, make sure it is safe in NMI context */
+   if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
+	    (unlikely(in_nmi()))) {
+   return NULL;
+   }
+
rb_start_commit(cpu_buffer);
/* The commit page can not change after this */
 
-- 
2.43.0




[PATCH AUTOSEL 6.6 17/18] tracing: Fix uaf issue when open the hist or hist_debug file

2023-12-18 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit 1cc111b9cddc71ce161cd388f11f0e9048edffdb ]

KASAN reports the following issue. The root cause is opening the 'hist'
file of an instance and accessing 'trace_event_file' in hist_show()
after 'trace_event_file' has been freed due to the instance being
removed. The 'hist_debug' file has the same problem. To fix it, call
tracing_{open,release}_file_tr() in the file_operations callbacks to
take the ref count and avoid 'trace_event_file' being freed.

  BUG: KASAN: slab-use-after-free in hist_show+0x11e0/0x1278
  Read of size 8 at addr 242541e336b8 by task head/190

  CPU: 4 PID: 190 Comm: head Not tainted 6.7.0-rc5-g26aff849438c #133
  Hardware name: linux,dummy-virt (DT)
  Call trace:
   dump_backtrace+0x98/0xf8
   show_stack+0x1c/0x30
   dump_stack_lvl+0x44/0x58
   print_report+0xf0/0x5a0
   kasan_report+0x80/0xc0
   __asan_report_load8_noabort+0x1c/0x28
   hist_show+0x11e0/0x1278
   seq_read_iter+0x344/0xd78
   seq_read+0x128/0x1c0
   vfs_read+0x198/0x6c8
   ksys_read+0xf4/0x1e0
   __arm64_sys_read+0x70/0xa8
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Allocated by task 188:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_alloc_info+0x20/0x30
   __kasan_slab_alloc+0x6c/0x80
   kmem_cache_alloc+0x15c/0x4a8
   trace_create_new_event+0x84/0x348
   __trace_add_new_event+0x18/0x88
   event_trace_add_tracer+0xc4/0x1a0
   trace_array_create_dir+0x6c/0x100
   trace_array_create+0x2e8/0x568
   instance_mkdir+0x48/0x80
   tracefs_syscall_mkdir+0x90/0xe8
   vfs_mkdir+0x3c4/0x610
   do_mkdirat+0x144/0x200
   __arm64_sys_mkdirat+0x8c/0xc0
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

  Freed by task 191:
   kasan_save_stack+0x28/0x50
   kasan_set_track+0x28/0x38
   kasan_save_free_info+0x34/0x58
   __kasan_slab_free+0xe4/0x158
   kmem_cache_free+0x19c/0x508
   event_file_put+0xa0/0x120
   remove_event_file_dir+0x180/0x320
   event_trace_del_tracer+0xb0/0x180
   __remove_instance+0x224/0x508
   instance_rmdir+0x44/0x78
   tracefs_syscall_rmdir+0xbc/0x140
   vfs_rmdir+0x1cc/0x4c8
   do_rmdir+0x220/0x2b8
   __arm64_sys_unlinkat+0xc0/0x100
   invoke_syscall+0x70/0x260
   el0_svc_common.constprop.0+0xb0/0x280
   do_el0_svc+0x44/0x60
   el0_svc+0x34/0x68
   el0t_64_sync_handler+0xb8/0xc0
   el0t_64_sync+0x168/0x170

Link: 
https://lore.kernel.org/linux-trace-kernel/20231214012153.676155-1-zhengyeji...@huawei.com

Suggested-by: Steven Rostedt 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c |  6 ++
 kernel/trace/trace.h |  1 +
 kernel/trace/trace_events_hist.c | 12 
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b7d0dfb04ae5d..d25fc3a756edc 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4976,6 +4976,12 @@ int tracing_release_file_tr(struct inode *inode, struct 
file *filp)
return 0;
 }
 
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp)
+{
+   tracing_release_file_tr(inode, filp);
+   return single_release(inode, filp);
+}
+
 static int tracing_mark_open(struct inode *inode, struct file *filp)
 {
stream_open(inode, filp);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index d608f61287043..51c0a970339e2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -612,6 +612,7 @@ int tracing_open_generic(struct inode *inode, struct file 
*filp);
 int tracing_open_generic_tr(struct inode *inode, struct file *filp);
 int tracing_open_file_tr(struct inode *inode, struct file *filp);
 int tracing_release_file_tr(struct inode *inode, struct file *filp);
+int tracing_single_release_file_tr(struct inode *inode, struct file *filp);
 bool tracing_is_disabled(void);
 bool tracer_tracing_is_on(struct trace_array *tr);
 void tracer_tracing_on(struct trace_array *tr);
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index d06938ae07174..68aaf0bd7a78d 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5630,10 +5630,12 @@ static int event_hist_open(struct inode *inode, struct 
file *file)
 {
int ret;
 
-   ret = security_locked_down(LOCKDOWN_TRACEFS);
+   ret = tracing_open_file_tr(inode, file);
if (ret)
return ret;
 
+   /* Clear private_data to avoid warning in single_open() */
+   file->private_data = NULL;
return single_open(file, hist_show, file);
 }
 
@@ -5641,7 +5643,7 @@ const struct file_operations event_hist_fops = {
.open = event_hist_open,
.read = seq_read,
.llseek = seq_lseek,
-   .rele

[PATCH AUTOSEL 6.6 14/18] tracing: Add size check when printing trace_marker output

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 60be76eeabb3d83858cc6577fc65c7d0f36ffd42 ]

If for some reason the trace_marker write does not have a nul byte for the
string, it will overflow the print:

  trace_seq_printf(s, ": %s", field->buf);

The field->buf could be missing the nul byte. To prevent overflow, add the
max size that the buf can be by using the event size and the field
location.

  int max = iter->ent_size - offsetof(struct print_entry, buf);

  trace_seq_printf(s, ": %*.s", max, field->buf);

Link: 
https://lore.kernel.org/linux-trace-kernel/2023121208.4619b...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_output.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
index db575094c4982..3b7d3e9eb6ea4 100644
--- a/kernel/trace/trace_output.c
+++ b/kernel/trace/trace_output.c
@@ -1587,11 +1587,12 @@ static enum print_line_t trace_print_print(struct 
trace_iterator *iter,
 {
struct print_entry *field;
	struct trace_seq *s = &iter->seq;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
seq_print_ip_sym(s, field->ip, flags);
-   trace_seq_printf(s, ": %s", field->buf);
+   trace_seq_printf(s, ": %.*s", max, field->buf);
 
return trace_handle_return(s);
 }
@@ -1600,10 +1601,11 @@ static enum print_line_t trace_print_raw(struct 
trace_iterator *iter, int flags,
 struct trace_event *event)
 {
struct print_entry *field;
+   int max = iter->ent_size - offsetof(struct print_entry, buf);
 
trace_assign_type(field, iter->ent);
 
-	trace_seq_printf(&iter->seq, "# %lx %s", field->ip, field->buf);
+	trace_seq_printf(&iter->seq, "# %lx %.*s", field->ip, max, field->buf);
 
	return trace_handle_return(&iter->seq);
 }
-- 
2.43.0




[PATCH AUTOSEL 6.6 13/18] tracing: Have large events show up as '[LINE TOO BIG]' instead of nothing

2023-12-18 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit b55b0a0d7c4aa2dac3579aa7e6802d1f57445096 ]

If a large event was added to the ring buffer that is larger than what the
trace_seq can handle, it just drops the output:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
           <...>-859     [001] .....   141.118951: tracing_mark_write
           <...>-859     [001] .....   141.148201: tracing_mark_write: 78901234

Instead, catch this case and add some context:

 ~# cat /sys/kernel/tracing/trace
 # tracer: nop
 #
 # entries-in-buffer/entries-written: 2/2   #P:8
 #
 #                                _-----=> irqs-off/BH-disabled
 #                               / _----=> need-resched
 #                              | / _---=> hardirq/softirq
 #                              || / _--=> preempt-depth
 #                              ||| / _-=> migrate-disable
 #                              |||| /     delay
 #           TASK-PID     CPU#  |||||  TIMESTAMP  FUNCTION
 #              | |         |   |||||     |         |
           <...>-852     [001] .....   121.550551: tracing_mark_write[LINE TOO BIG]
           <...>-852     [001] .....   121.550581: tracing_mark_write: 78901234

This now emulates the same output as trace_pipe.

Link: 
https://lore.kernel.org/linux-trace-kernel/20231209171058.78c1a...@gandalf.local.home

Cc: Mark Rutland 
Cc: Mathieu Desnoyers 
Reviewed-by: Masami Hiramatsu (Google) 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace.c | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index b656cab67f67e..b7d0dfb04ae5d 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -4730,7 +4730,11 @@ static int s_show(struct seq_file *m, void *v)
iter->leftover = ret;
 
} else {
-   print_trace_line(iter);
+   ret = print_trace_line(iter);
+   if (ret == TRACE_TYPE_PARTIAL_LINE) {
+   iter->seq.full = 0;
+   trace_seq_puts(&iter->seq, "[LINE TOO BIG]\n");
+   }
	ret = trace_print_seq(m, &iter->seq);
/*
 * If we overflow the seq_file buffer, then it will
-- 
2.43.0




[PATCH AUTOSEL 6.6 26/47] pds_vdpa: set features order

2023-12-11 Thread Sasha Levin
From: Shannon Nelson 

[ Upstream commit cefc9ba6aed48a3aa085888e3262ac2aa975714b ]

Fix up the order in which the device and negotiated features are
checked, to get a more reliable difference when things change.

Signed-off-by: Shannon Nelson 
Message-Id: <20231110221802.46841-4-shannon.nel...@amd.com>
Signed-off-by: Michael S. Tsirkin 
Acked-by: Jason Wang 
Signed-off-by: Sasha Levin 
---
 drivers/vdpa/pds/vdpa_dev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 9fc89c82d1f01..25c0fe5ec3d5d 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -318,9 +318,8 @@ static int pds_vdpa_set_driver_features(struct vdpa_device 
*vdpa_dev, u64 featur
return -EOPNOTSUPP;
}
 
-   pdsv->negotiated_features = nego_features;
-
driver_features = pds_vdpa_get_driver_features(vdpa_dev);
+   pdsv->negotiated_features = nego_features;
dev_dbg(dev, "%s: %#llx => %#llx\n",
__func__, driver_features, nego_features);
 
-- 
2.42.0




[PATCH AUTOSEL 6.6 24/47] pds_vdpa: fix up format-truncation complaint

2023-12-11 Thread Sasha Levin
From: Shannon Nelson 

[ Upstream commit 4f317d6529d7fc3ab7769ef89645d43fc7eec61b ]

Our friendly kernel test robot has recently been pointing out
some format-truncation issues.  Here's a fix for one of them.
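The warning being fixed is GCC's -Wformat-truncation: for `"vq%02d"` with a full-range int, the worst-case output ("vq" plus up to eleven digit/sign characters plus the nul) cannot fit in `char name[8]`, so the compiler flags the snprintf. A small sketch (sizes illustrative) shows how truncation is detected from snprintf's return value, which is the length the output *would* have had:

```c
#include <stdio.h>

/* A return value >= the buffer size means the name was truncated. */
static int make_vq_name(char *name, size_t sz, int i)
{
    return snprintf(name, sz, "vq%02d", i);
}
```

Widening the buffer to 16 bytes, as the patch does, leaves room for any int and silences the diagnostic without changing behaviour for the normal small queue indices.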

Reported-by: kernel test robot 
Closes: 
https://lore.kernel.org/oe-kbuild-all/202311040109.rfgjoe7l-...@intel.com/
Signed-off-by: Shannon Nelson 
Message-Id: <20231110221802.46841-2-shannon.nel...@amd.com>
Signed-off-by: Michael S. Tsirkin 
Acked-by: Jason Wang 
Signed-off-by: Sasha Levin 
---
 drivers/vdpa/pds/debugfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/vdpa/pds/debugfs.c b/drivers/vdpa/pds/debugfs.c
index 9b04aad6ec35d..c328e694f6e7f 100644
--- a/drivers/vdpa/pds/debugfs.c
+++ b/drivers/vdpa/pds/debugfs.c
@@ -261,7 +261,7 @@ void pds_vdpa_debugfs_add_vdpadev(struct pds_vdpa_aux 
*vdpa_aux)
debugfs_create_file("config", 0400, vdpa_aux->dentry, vdpa_aux->pdsv, 
_fops);
 
for (i = 0; i < vdpa_aux->pdsv->num_vqs; i++) {
-   char name[8];
+   char name[16];
 
snprintf(name, sizeof(name), "vq%02d", i);
debugfs_create_file(name, 0400, vdpa_aux->dentry,
-- 
2.42.0




[PATCH AUTOSEL 6.6 25/47] pds_vdpa: clear config callback when status goes to 0

2023-12-11 Thread Sasha Levin
From: Shannon Nelson 

[ Upstream commit dd3b8de16e90c5594eddd29aeeb99e97c6f863be ]

If the client driver is setting status to 0, something is getting shut
down and possibly removed.  Make sure we clear the config_cb so that it
doesn't end up crashing when trying to call a bogus callback.
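The defensive pattern here — overwrite the stored callback with an empty one before tearing down, so a late event cannot jump through a dangling pointer — can be sketched as follows. The struct mirrors the shape of vdpa_callback but is a simplified illustration:

```c
#include <stddef.h>

/* Simplified callback slot, in the shape of struct vdpa_callback. */
struct cb {
    void (*callback)(void *private);
    void *private;
};

/* Event delivery: a NULL callback is simply skipped, never called. */
static void fire(struct cb *c)
{
    if (c->callback)
        c->callback(c->private);
}

/* On status 0, install an all-zero callback before resetting, so a
 * stale config notification after shutdown is a harmless no-op. */
static void set_status_zero(struct cb *c)
{
    struct cb null_cb = { 0 };
    *c = null_cb;
}
```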

Signed-off-by: Shannon Nelson 
Message-Id: <20231110221802.46841-3-shannon.nel...@amd.com>
Signed-off-by: Michael S. Tsirkin 
Acked-by: Jason Wang 
Signed-off-by: Sasha Levin 
---
 drivers/vdpa/pds/vdpa_dev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/vdpa/pds/vdpa_dev.c b/drivers/vdpa/pds/vdpa_dev.c
index 52b2449182ad7..9fc89c82d1f01 100644
--- a/drivers/vdpa/pds/vdpa_dev.c
+++ b/drivers/vdpa/pds/vdpa_dev.c
@@ -461,8 +461,10 @@ static void pds_vdpa_set_status(struct vdpa_device 
*vdpa_dev, u8 status)
 
pds_vdpa_cmd_set_status(pdsv, status);
 
-   /* Note: still working with FW on the need for this reset cmd */
if (status == 0) {
+   struct vdpa_callback null_cb = { };
+
+   pds_vdpa_set_config_cb(vdpa_dev, &null_cb);
pds_vdpa_cmd_reset(pdsv);
 
for (i = 0; i < pdsv->num_vqs; i++) {
-- 
2.42.0




[PATCH AUTOSEL 6.6 35/40] eventfs: Do not allow NULL parent to eventfs_start_creating()

2023-11-28 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit fc4561226feaad5fcdcb55646c348d77b8ee69c5 ]

The eventfs directory is dynamically created via the meta data supplied by
the existing trace events. All files and directories in eventfs have a
parent. Do not allow NULL to be passed into eventfs_start_creating() as
the parent because that should never happen. Warn if it does.

Link: https://lkml.kernel.org/r/20231121231112.693841...@goodmis.org

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Cc: Andrew Morton 
Reviewed-by: Josef Bacik 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 fs/tracefs/inode.c | 13 -
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
index 891653ba9cf35..0292c6a2bed9f 100644
--- a/fs/tracefs/inode.c
+++ b/fs/tracefs/inode.c
@@ -509,20 +509,15 @@ struct dentry *eventfs_start_creating(const char *name, 
struct dentry *parent)
struct dentry *dentry;
int error;
 
+   /* Must always have a parent. */
+   if (WARN_ON_ONCE(!parent))
+   return ERR_PTR(-EINVAL);
+
	error = simple_pin_fs(&trace_fs_type, &tracefs_mount,
			      &tracefs_mount_count);
if (error)
return ERR_PTR(error);
 
-   /*
-* If the parent is not specified, we create it in the root.
-* We need the root dentry to do this, which is in the super
-* block. A pointer to that is in the struct vfsmount that we
-* have around.
-*/
-   if (!parent)
-   parent = tracefs_mount->mnt_root;
-
if (unlikely(IS_DEADDIR(parent->d_inode)))
dentry = ERR_PTR(-ENOENT);
else
-- 
2.42.0




[PATCH AUTOSEL 6.1 5/6] vhost-vdpa: clean iotlb map during reset for older userspace

2023-11-12 Thread Sasha Levin
From: Si-Wei Liu 

[ Upstream commit bc91df5c70ac720eca18bd1f4a288f2582713d3e ]

Using the .compat_reset op from the previous patch, the buggy .reset
behaviour can be kept as-is for older userspace apps that don't ack the
IOTLB_PERSIST backend feature. As this compatibility quirk is limited to
drivers that used to be buggy in the past, it won't change the behaviour
or affect the ABI on setups with an API-compliant driver.

The separation of .compat_reset from the regular .reset allows
vhost-vdpa to know which driver had broken behaviour before, so it can
apply the corresponding compatibility quirk to the individual driver
whenever needed.  Compared to overloading the existing .reset with
flags, .compat_reset won't cause any extra burden to the implementation
of every compliant driver.
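The dispatch added to vdpa_reset() can be sketched standalone (simplified ops table, illustrative names): the optional compat op is taken only when a quirk flag is actually requested *and* the driver provides it, so compliant drivers never see the flags.

```c
/* Simplified vdpa_config_ops dispatch: compat_reset is optional and
 * only consulted when a compatibility flag is requested. */
#define RESET_F_CLEAN_MAP (1u << 0)

struct ops {
    int (*reset)(void);
    int (*compat_reset)(unsigned int flags);   /* may be NULL */
};

static int do_reset(const struct ops *ops, unsigned int flags)
{
    if (ops->compat_reset && flags)
        return ops->compat_reset(flags);
    return ops->reset();
}
```

This mirrors the design argument in the message: a driver that never implements compat_reset keeps its plain reset path byte-for-byte unchanged.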

[mst: squashed in two fixup commits]

Message-Id: <1697880319-4937-6-git-send-email-si-wei@oracle.com>
Message-Id: <1698102863-21122-1-git-send-email-si-wei@oracle.com>
Reported-by: Dragos Tatulea 
Tested-by: Dragos Tatulea 
Message-Id: <1698275594-19204-1-git-send-email-si-wei@oracle.com>
Reported-by: Lei Yang 
Signed-off-by: Si-Wei Liu 
Signed-off-by: Michael S. Tsirkin 
Tested-by: Lei Yang 
Signed-off-by: Sasha Levin 
---
 drivers/vhost/vdpa.c | 20 
 drivers/virtio/virtio_vdpa.c |  2 +-
 include/linux/vdpa.h |  7 +--
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 31a156669a531..9e60a1d3b8166 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -210,13 +210,24 @@ static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa 
*v, u16 qid)
	irq_bypass_unregister_producer(&vq->call_ctx.producer);
 }
 
-static int vhost_vdpa_reset(struct vhost_vdpa *v)
+static int _compat_vdpa_reset(struct vhost_vdpa *v)
 {
struct vdpa_device *vdpa = v->vdpa;
+   u32 flags = 0;
 
-   v->in_batch = 0;
+   if (v->vdev.vqs) {
+   flags |= !vhost_backend_has_feature(v->vdev.vqs[0],
+					VHOST_BACKEND_F_IOTLB_PERSIST) ?
+VDPA_RESET_F_CLEAN_MAP : 0;
+   }
+
+   return vdpa_reset(vdpa, flags);
+}
 
-   return vdpa_reset(vdpa);
+static int vhost_vdpa_reset(struct vhost_vdpa *v)
+{
+   v->in_batch = 0;
+   return _compat_vdpa_reset(v);
 }
 
 static long vhost_vdpa_get_device_id(struct vhost_vdpa *v, u8 __user *argp)
@@ -273,7 +284,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 
__user *statusp)
vhost_vdpa_unsetup_vq_irq(v, i);
 
if (status == 0) {
-   ret = vdpa_reset(vdpa);
+   ret = _compat_vdpa_reset(v);
if (ret)
return ret;
} else
@@ -1202,6 +1213,7 @@ static void vhost_vdpa_cleanup(struct vhost_vdpa *v)
vhost_vdpa_free_domain(v);
	vhost_dev_cleanup(&v->vdev);
kfree(v->vdev.vqs);
+   v->vdev.vqs = NULL;
 }
 
 static int vhost_vdpa_open(struct inode *inode, struct file *filep)
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 9670cc79371d8..73d052b4fe4d2 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -99,7 +99,7 @@ static void virtio_vdpa_reset(struct virtio_device *vdev)
 {
struct vdpa_device *vdpa = vd_get_vdpa(vdev);
 
-   vdpa_reset(vdpa);
+   vdpa_reset(vdpa, 0);
 }
 
 static bool virtio_vdpa_notify(struct virtqueue *vq)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 6d0f5e4e82c25..2737f951fbd8f 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -427,14 +427,17 @@ static inline struct device *vdpa_get_dma_dev(struct 
vdpa_device *vdev)
return vdev->dma_dev;
 }
 
-static inline int vdpa_reset(struct vdpa_device *vdev)
+static inline int vdpa_reset(struct vdpa_device *vdev, u32 flags)
 {
const struct vdpa_config_ops *ops = vdev->config;
int ret;
 
	down_write(&vdev->cf_lock);
vdev->features_valid = false;
-   ret = ops->reset(vdev);
+   if (ops->compat_reset && flags)
+   ret = ops->compat_reset(vdev, flags);
+   else
+   ret = ops->reset(vdev);
	up_write(&vdev->cf_lock);
return ret;
 }
-- 
2.42.0




[PATCH AUTOSEL 6.5 6/7] vhost-vdpa: clean iotlb map during reset for older userspace

2023-11-12 Thread Sasha Levin
From: Si-Wei Liu 

[ Upstream commit bc91df5c70ac720eca18bd1f4a288f2582713d3e ]

Using the .compat_reset op from the previous patch, the buggy .reset
behaviour can be kept as-is for older userspace apps that don't ack the
IOTLB_PERSIST backend feature. As this compatibility quirk is limited to
drivers that used to be buggy in the past, it won't change the behaviour
or affect the ABI on setups with an API-compliant driver.

The separation of .compat_reset from the regular .reset allows
vhost-vdpa to know which driver had broken behaviour before, so it can
apply the corresponding compatibility quirk to the individual driver
whenever needed.  Compared to overloading the existing .reset with
flags, .compat_reset won't cause any extra burden to the implementation
of every compliant driver.

[mst: squashed in two fixup commits]

Message-Id: <1697880319-4937-6-git-send-email-si-wei@oracle.com>
Message-Id: <1698102863-21122-1-git-send-email-si-wei@oracle.com>
Reported-by: Dragos Tatulea 
Tested-by: Dragos Tatulea 
Message-Id: <1698275594-19204-1-git-send-email-si-wei@oracle.com>
Reported-by: Lei Yang 
Signed-off-by: Si-Wei Liu 
Signed-off-by: Michael S. Tsirkin 
Tested-by: Lei Yang 
Signed-off-by: Sasha Levin 
---
 drivers/vhost/vdpa.c | 20 
 drivers/virtio/virtio_vdpa.c |  2 +-
 include/linux/vdpa.h |  7 +--
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index b43e8680eee8d..fb934a7e68bfb 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -210,13 +210,24 @@ static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa 
*v, u16 qid)
	irq_bypass_unregister_producer(&vq->call_ctx.producer);
 }
 
-static int vhost_vdpa_reset(struct vhost_vdpa *v)
+static int _compat_vdpa_reset(struct vhost_vdpa *v)
 {
struct vdpa_device *vdpa = v->vdpa;
+   u32 flags = 0;
 
-   v->in_batch = 0;
+   if (v->vdev.vqs) {
+   flags |= !vhost_backend_has_feature(v->vdev.vqs[0],
+					VHOST_BACKEND_F_IOTLB_PERSIST) ?
+VDPA_RESET_F_CLEAN_MAP : 0;
+   }
+
+   return vdpa_reset(vdpa, flags);
+}
 
-   return vdpa_reset(vdpa);
+static int vhost_vdpa_reset(struct vhost_vdpa *v)
+{
+   v->in_batch = 0;
+   return _compat_vdpa_reset(v);
 }
 
 static long vhost_vdpa_bind_mm(struct vhost_vdpa *v)
@@ -295,7 +306,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 
__user *statusp)
vhost_vdpa_unsetup_vq_irq(v, i);
 
if (status == 0) {
-   ret = vdpa_reset(vdpa);
+   ret = _compat_vdpa_reset(v);
if (ret)
return ret;
} else
@@ -1272,6 +1283,7 @@ static void vhost_vdpa_cleanup(struct vhost_vdpa *v)
vhost_vdpa_free_domain(v);
	vhost_dev_cleanup(&v->vdev);
kfree(v->vdev.vqs);
+   v->vdev.vqs = NULL;
 }
 
 static int vhost_vdpa_open(struct inode *inode, struct file *filep)
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 06ce6d8c2e004..8d63e5923d245 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -100,7 +100,7 @@ static void virtio_vdpa_reset(struct virtio_device *vdev)
 {
struct vdpa_device *vdpa = vd_get_vdpa(vdev);
 
-   vdpa_reset(vdpa);
+   vdpa_reset(vdpa, 0);
 }
 
 static bool virtio_vdpa_notify(struct virtqueue *vq)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index db1b0eaef4eb7..c287382b0a80b 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -481,14 +481,17 @@ static inline struct device *vdpa_get_dma_dev(struct 
vdpa_device *vdev)
return vdev->dma_dev;
 }
 
-static inline int vdpa_reset(struct vdpa_device *vdev)
+static inline int vdpa_reset(struct vdpa_device *vdev, u32 flags)
 {
const struct vdpa_config_ops *ops = vdev->config;
int ret;
 
	down_write(&vdev->cf_lock);
vdev->features_valid = false;
-   ret = ops->reset(vdev);
+   if (ops->compat_reset && flags)
+   ret = ops->compat_reset(vdev, flags);
+   else
+   ret = ops->reset(vdev);
	up_write(&vdev->cf_lock);
return ret;
 }
-- 
2.42.0




[PATCH AUTOSEL 6.6 6/7] vhost-vdpa: clean iotlb map during reset for older userspace

2023-11-12 Thread Sasha Levin
From: Si-Wei Liu 

[ Upstream commit bc91df5c70ac720eca18bd1f4a288f2582713d3e ]

Using the .compat_reset op from the previous patch, the buggy .reset
behaviour can be kept as-is for older userspace apps that don't ack the
IOTLB_PERSIST backend feature. As this compatibility quirk is limited to
drivers that used to be buggy in the past, it won't change the behaviour
or affect the ABI on setups with an API-compliant driver.

The separation of .compat_reset from the regular .reset allows
vhost-vdpa to know which driver had broken behaviour before, so it can
apply the corresponding compatibility quirk to the individual driver
whenever needed.  Compared to overloading the existing .reset with
flags, .compat_reset won't cause any extra burden to the implementation
of every compliant driver.

[mst: squashed in two fixup commits]

Message-Id: <1697880319-4937-6-git-send-email-si-wei@oracle.com>
Message-Id: <1698102863-21122-1-git-send-email-si-wei@oracle.com>
Reported-by: Dragos Tatulea 
Tested-by: Dragos Tatulea 
Message-Id: <1698275594-19204-1-git-send-email-si-wei@oracle.com>
Reported-by: Lei Yang 
Signed-off-by: Si-Wei Liu 
Signed-off-by: Michael S. Tsirkin 
Tested-by: Lei Yang 
Signed-off-by: Sasha Levin 
---
 drivers/vhost/vdpa.c | 20 
 drivers/virtio/virtio_vdpa.c |  2 +-
 include/linux/vdpa.h |  7 +--
 3 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 78379ffd23363..183cec8305e3e 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -210,13 +210,24 @@ static void vhost_vdpa_unsetup_vq_irq(struct vhost_vdpa 
*v, u16 qid)
	irq_bypass_unregister_producer(&vq->call_ctx.producer);
 }
 
-static int vhost_vdpa_reset(struct vhost_vdpa *v)
+static int _compat_vdpa_reset(struct vhost_vdpa *v)
 {
struct vdpa_device *vdpa = v->vdpa;
+   u32 flags = 0;
 
-   v->in_batch = 0;
+   if (v->vdev.vqs) {
+   flags |= !vhost_backend_has_feature(v->vdev.vqs[0],
+					VHOST_BACKEND_F_IOTLB_PERSIST) ?
+VDPA_RESET_F_CLEAN_MAP : 0;
+   }
+
+   return vdpa_reset(vdpa, flags);
+}
 
-   return vdpa_reset(vdpa);
+static int vhost_vdpa_reset(struct vhost_vdpa *v)
+{
+   v->in_batch = 0;
+   return _compat_vdpa_reset(v);
 }
 
 static long vhost_vdpa_bind_mm(struct vhost_vdpa *v)
@@ -295,7 +306,7 @@ static long vhost_vdpa_set_status(struct vhost_vdpa *v, u8 
__user *statusp)
vhost_vdpa_unsetup_vq_irq(v, i);
 
if (status == 0) {
-   ret = vdpa_reset(vdpa);
+   ret = _compat_vdpa_reset(v);
if (ret)
return ret;
} else
@@ -1285,6 +1296,7 @@ static void vhost_vdpa_cleanup(struct vhost_vdpa *v)
vhost_vdpa_free_domain(v);
	vhost_dev_cleanup(&v->vdev);
kfree(v->vdev.vqs);
+   v->vdev.vqs = NULL;
 }
 
 static int vhost_vdpa_open(struct inode *inode, struct file *filep)
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 06ce6d8c2e004..8d63e5923d245 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -100,7 +100,7 @@ static void virtio_vdpa_reset(struct virtio_device *vdev)
 {
struct vdpa_device *vdpa = vd_get_vdpa(vdev);
 
-   vdpa_reset(vdpa);
+   vdpa_reset(vdpa, 0);
 }
 
 static bool virtio_vdpa_notify(struct virtqueue *vq)
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 0e652026b776f..3e1af63803e55 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -485,14 +485,17 @@ static inline struct device *vdpa_get_dma_dev(struct vdpa_device *vdev)
return vdev->dma_dev;
 }
 
-static inline int vdpa_reset(struct vdpa_device *vdev)
+static inline int vdpa_reset(struct vdpa_device *vdev, u32 flags)
 {
const struct vdpa_config_ops *ops = vdev->config;
int ret;
 
down_write(&vdev->cf_lock);
vdev->features_valid = false;
-   ret = ops->reset(vdev);
+   if (ops->compat_reset && flags)
+   ret = ops->compat_reset(vdev, flags);
+   else
+   ret = ops->reset(vdev);
up_write(&vdev->cf_lock);
return ret;
 }
-- 
2.42.0
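The dispatch the patch adds to vdpa_reset() — prefer the driver's compat_reset() only when the op exists and nonzero flags were passed, otherwise fall back to the legacy reset op — can be sketched as a small userspace C model. Types, names, and the flag value here are illustrative stand-ins for the kernel's, not the real API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define VDPA_RESET_F_CLEAN_MAP (1U << 0) /* illustrative flag value */

struct dev;

/* Stand-in for the vdpa config ops table: a mandatory legacy reset
 * and an optional flag-aware reset. */
struct ops {
    int (*reset)(struct dev *d);
    int (*compat_reset)(struct dev *d, uint32_t flags);
};

struct dev {
    const struct ops *ops;
    uint32_t last_flags; /* records which path ran, for demonstration */
};

/* Mirrors the dispatch in vdpa_reset(): take the compat path only when
 * the driver implements it AND the caller passed nonzero flags. */
static int do_reset(struct dev *d, uint32_t flags)
{
    if (d->ops->compat_reset && flags)
        return d->ops->compat_reset(d, flags);
    return d->ops->reset(d);
}

static int legacy_reset(struct dev *d) { d->last_flags = 0; return 0; }
static int flag_reset(struct dev *d, uint32_t flags) { d->last_flags = flags; return 0; }
```

Keeping the zero-flags case on the legacy op matches how callers like virtio_vdpa_reset() in the patch pass 0 and therefore keep their old behavior even on drivers that implement the compat op.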




[PATCH AUTOSEL 4.14 7/7] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 37be6913cfb27..f29552b009c80 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2240,6 +2240,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_printk(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1
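The fix above is an instance of a general pattern: inside a long loop that cannot otherwise be preempted, offer the scheduler a yield point after each unit of work. A minimal userspace analogue — with POSIX sched_yield() standing in for the kernel's cond_resched(), purely for illustration — looks like:

```c
#include <assert.h>
#include <sched.h>
#include <stddef.h>

/* Userspace analogue of the kernel pattern: a long loop over many work
 * items that yields to the scheduler after each item, so a
 * non-preemptible section cannot monopolize the CPU. In the kernel the
 * yield point is cond_resched(); sched_yield() stands in here. */
static long process_all(const long *items, size_t n)
{
    long sum = 0;

    for (size_t i = 0; i < n; i++) {
        sum += items[i]; /* the per-item work */
        sched_yield();   /* yield point after each unit of work */
    }
    return sum;
}
```

The yield is placed once per outer-loop iteration, matching where the patch puts cond_resched(): often enough to keep latency bounded, rarely enough not to dominate the loop's cost.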




[PATCH AUTOSEL 4.19 8/8] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index a3dc6c126b3ee..ed39d3ec202e6 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2242,6 +2242,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_printk(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 5.4 8/8] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 0c21da12b650c..09fb9b0e38d75 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2234,6 +2234,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_printk(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 5.10 8/8] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index a46d34d840f69..1221b11ea0098 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2449,6 +2449,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_printk(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 5.15 10/10] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 2a2a57671..e6aef0066ccb8 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2751,6 +2751,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_fields(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 6.1 12/12] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 9da418442a063..2e3dce5e2575e 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2777,6 +2777,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_fields(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 6.5 18/18] tracing: relax trace_event_eval_update() execution with cond_resched()

2023-10-07 Thread Sasha Levin
From: Clément Léger 

[ Upstream commit 23cce5f25491968b23fb9c399bbfb25f13870cd9 ]

When the kernel is compiled without preemption, eval_map_work_func()
(which calls trace_event_eval_update()) will not be preempted until it
runs to completion. This can cause a problem: if another CPU calls
stop_machine(), that call has to wait for the eval_map_work_func()
workqueue item to finish executing before it can be scheduled. This
problem was observed on an SMP system at boot time, when the CPU running
the initcalls executed clocksource_done_booting(), which ultimately
calls stop_machine(). We observed a 1 second delay because one CPU was
executing eval_map_work_func() and was not preempted by the
stop_machine() task.

Adding a call to cond_resched() in trace_event_eval_update() allows
other tasks to be executed and thus continue working asynchronously
like before without blocking any pending task at boot time.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230929191637.416931-1-cle...@rivosinc.com

Cc: Masami Hiramatsu 
Signed-off-by: Clément Léger 
Tested-by: Atish Patra 
Reviewed-by: Atish Patra 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/trace_events.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 0cf84a7449f5b..9841589b4af7f 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2777,6 +2777,7 @@ void trace_event_eval_update(struct trace_eval_map **map, int len)
update_event_fields(call, map[i]);
}
}
+   cond_resched();
}
up_write(&trace_event_sem);
 }
-- 
2.40.1




[PATCH AUTOSEL 4.14 5/6] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1949d7bbe145d..f0d4ff2db2ef0 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1686,6 +1686,8 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
get_online_cpus();
-- 
2.40.1




[PATCH AUTOSEL 4.19 6/7] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index c8a7de7a1d635..320aa60664dc9 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1753,6 +1753,8 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
get_online_cpus();
-- 
2.40.1




[PATCH AUTOSEL 5.4 6/7] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 445475c229b3a..2a4fb4f1e3cad 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1821,6 +1821,8 @@ int ring_buffer_resize(struct ring_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
get_online_cpus();
-- 
2.40.1




[PATCH AUTOSEL 5.10 09/13] ring-buffer: Do not attempt to read past "commit"

2023-09-24 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 95a404bd60af6c4d9d8db01ad14fe8957ece31ca ]

When iterating over the ring buffer while the ring buffer is active, the
writer can corrupt the reader. There are barriers to help detect and
handle this, but that code missed the case where the last event sits at
the very end of the page with only 4 bytes left.

The checks that detect corruption of reads by the writer need to see the
length of the event. If the length in the first 4 bytes is zero, the
length is stored in the second 4 bytes. But if the writer is in the
middle of updating that field, there is a small window where the length
in the first 4 bytes could be zero even though the event is only 4 bytes
long. That causes rb_event_length() to read the next 4 bytes, which may
lie off the end of the allocated page.

To protect against this, fail immediately if the next event pointer is
less than 8 bytes from the end of the commit (last byte of data), as all
events must be a minimum of 8 bytes anyway.

Link: 
https://lore.kernel.org/all/20230905141245.26470-1-tze-nan...@mediatek.com/
Link: 
https://lore.kernel.org/linux-trace-kernel/20230907122820.08990...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Reported-by: Tze-nan Wu 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 752e9549a59e8..812ec380da820 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2260,6 +2260,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
 */
commit = rb_page_commit(iter_head_page);
smp_rmb();
+
+   /* An event needs to be at least 8 bytes in size */
+   if (iter->head > commit - 8)
+   goto reset;
+
event = __rb_page_index(iter_head_page, iter->head);
length = rb_event_length(event);
 
-- 
2.40.1
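The new check can be modeled in isolation: treating head and commit as byte offsets within a page, an event can only start at head if at least 8 bytes remain before commit. A userspace sketch of that predicate (names are illustrative, not the kernel's, and the unsigned subtraction in the real check is guarded explicitly here):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch of the bounds check the patch adds: an event needs at least
 * 8 bytes, so if fewer than 8 bytes remain between the iterator head
 * and the commit (end of valid data), reading the event's length field
 * could run past the page. The kernel treats that as a torn read and
 * resets the iterator. */
static bool iter_head_in_bounds(size_t head, size_t commit)
{
    /* Guard the unsigned subtraction: a commit smaller than 8 bytes
     * cannot hold a complete event at all. */
    if (commit < 8)
        return false;
    /* Mirrors the inverse of "if (iter->head > commit - 8) goto reset". */
    return head <= commit - 8;
}
```

The guard also shows why the kernel check is written as `head > commit - 8` only after commit is known to be a real data offset: with `commit < 8` the subtraction would wrap on unsigned types.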




[PATCH AUTOSEL 5.10 07/13] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index f8126fa0630e2..752e9549a59e8 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2080,6 +2080,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
get_online_cpus();
-- 
2.40.1




[PATCH AUTOSEL 5.15 08/18] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index db7cefd196cec..b15d72284c7f7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2176,6 +2176,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
cpus_read_lock();
-- 
2.40.1




[PATCH AUTOSEL 5.15 11/18] ring-buffer: Do not attempt to read past "commit"

2023-09-24 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 95a404bd60af6c4d9d8db01ad14fe8957ece31ca ]

When iterating over the ring buffer while the ring buffer is active, the
writer can corrupt the reader. There are barriers to help detect and
handle this, but that code missed the case where the last event sits at
the very end of the page with only 4 bytes left.

The checks that detect corruption of reads by the writer need to see the
length of the event. If the length in the first 4 bytes is zero, the
length is stored in the second 4 bytes. But if the writer is in the
middle of updating that field, there is a small window where the length
in the first 4 bytes could be zero even though the event is only 4 bytes
long. That causes rb_event_length() to read the next 4 bytes, which may
lie off the end of the allocated page.

To protect against this, fail immediately if the next event pointer is
less than 8 bytes from the end of the commit (last byte of data), as all
events must be a minimum of 8 bytes anyway.

Link: 
https://lore.kernel.org/all/20230905141245.26470-1-tze-nan...@mediatek.com/
Link: 
https://lore.kernel.org/linux-trace-kernel/20230907122820.08990...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Reported-by: Tze-nan Wu 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index b15d72284c7f7..69db849ae7dad 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2352,6 +2352,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
 */
commit = rb_page_commit(iter_head_page);
smp_rmb();
+
+   /* An event needs to be at least 8 bytes in size */
+   if (iter->head > commit - 8)
+   goto reset;
+
event = __rb_page_index(iter_head_page, iter->head);
length = rb_event_length(event);
 
-- 
2.40.1




[PATCH AUTOSEL 6.1 14/28] ring-buffer: Do not attempt to read past "commit"

2023-09-24 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 95a404bd60af6c4d9d8db01ad14fe8957ece31ca ]

When iterating over the ring buffer while the ring buffer is active, the
writer can corrupt the reader. There are barriers to help detect and
handle this, but that code missed the case where the last event sits at
the very end of the page with only 4 bytes left.

The checks that detect corruption of reads by the writer need to see the
length of the event. If the length in the first 4 bytes is zero, the
length is stored in the second 4 bytes. But if the writer is in the
middle of updating that field, there is a small window where the length
in the first 4 bytes could be zero even though the event is only 4 bytes
long. That causes rb_event_length() to read the next 4 bytes, which may
lie off the end of the allocated page.

To protect against this, fail immediately if the next event pointer is
less than 8 bytes from the end of the commit (last byte of data), as all
events must be a minimum of 8 bytes anyway.

Link: 
https://lore.kernel.org/all/20230905141245.26470-1-tze-nan...@mediatek.com/
Link: 
https://lore.kernel.org/linux-trace-kernel/20230907122820.08990...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Reported-by: Tze-nan Wu 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 42ad59a002365..c0b708b55c3b9 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2388,6 +2388,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
 */
commit = rb_page_commit(iter_head_page);
smp_rmb();
+
+   /* An event needs to be at least 8 bytes in size */
+   if (iter->head > commit - 8)
+   goto reset;
+
event = __rb_page_index(iter_head_page, iter->head);
length = rb_event_length(event);
 
-- 
2.40.1




[PATCH AUTOSEL 6.1 10/28] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index de55107aef5d5..42ad59a002365 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2212,6 +2212,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
cpus_read_lock();
-- 
2.40.1




[PATCH AUTOSEL 6.5 17/41] ring-buffer: Do not attempt to read past "commit"

2023-09-24 Thread Sasha Levin
From: "Steven Rostedt (Google)" 

[ Upstream commit 95a404bd60af6c4d9d8db01ad14fe8957ece31ca ]

When iterating over the ring buffer while the ring buffer is active, the
writer can corrupt the reader. There are barriers to help detect and
handle this, but that code missed the case where the last event sits at
the very end of the page with only 4 bytes left.

The checks that detect corruption of reads by the writer need to see the
length of the event. If the length in the first 4 bytes is zero, the
length is stored in the second 4 bytes. But if the writer is in the
middle of updating that field, there is a small window where the length
in the first 4 bytes could be zero even though the event is only 4 bytes
long. That causes rb_event_length() to read the next 4 bytes, which may
lie off the end of the allocated page.

To protect against this, fail immediately if the next event pointer is
less than 8 bytes from the end of the commit (last byte of data), as all
events must be a minimum of 8 bytes anyway.

Link: 
https://lore.kernel.org/all/20230905141245.26470-1-tze-nan...@mediatek.com/
Link: 
https://lore.kernel.org/linux-trace-kernel/20230907122820.08990...@gandalf.local.home

Cc: Masami Hiramatsu 
Cc: Mark Rutland 
Reported-by: Tze-nan Wu 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 1267e1016ab5c..53b73b85cf737 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2398,6 +2398,11 @@ rb_iter_head_event(struct ring_buffer_iter *iter)
 */
commit = rb_page_commit(iter_head_page);
smp_rmb();
+
+   /* An event needs to be at least 8 bytes in size */
+   if (iter->head > commit - 8)
+   goto reset;
+
event = __rb_page_index(iter_head_page, iter->head);
length = rb_event_length(event);
 
-- 
2.40.1




[PATCH AUTOSEL 6.5 12/41] ring-buffer: Avoid softlockup in ring_buffer_resize()

2023-09-24 Thread Sasha Levin
From: Zheng Yejian 

[ Upstream commit f6bd2c92488c30ef53b5bd80c52f0a7eee9d545a ]

When a user resizes all trace ring buffers through the file
'buffer_size_kb', ring_buffer_resize() allocates buffer pages for each
CPU in a loop.

If the kernel preemption model is PREEMPT_NONE and there are many CPUs
with many buffer pages to allocate, the loop may not give up the CPU for
a long time and eventually cause a softlockup.

To avoid it, call cond_resched() after each cpu buffer allocation.

Link: 
https://lore.kernel.org/linux-trace-kernel/20230906081930.3939106-1-zhengyeji...@huawei.com

Cc: 
Signed-off-by: Zheng Yejian 
Signed-off-by: Steven Rostedt (Google) 
Signed-off-by: Sasha Levin 
---
 kernel/trace/ring_buffer.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 52dea5dd5362e..1267e1016ab5c 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2206,6 +2206,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
err = -ENOMEM;
goto out_err;
}
+
+   cond_resched();
}
 
cpus_read_lock();
-- 
2.40.1




[PATCH AUTOSEL 4.19 2/4] ACPI: NFIT: fix a potential deadlock during NFIT teardown

2023-02-15 Thread Sasha Levin
From: Vishal Verma 

[ Upstream commit fb6df4366f86dd252bfa3049edffa52d17e7b895 ]

Lockdep reports that acpi_nfit_shutdown() may deadlock against an
opportune acpi_nfit_scrub(). acpi_nfit_scrub() runs from inside a
'work' item and therefore has already acquired workqueue-internal
locks. It also acquires acpi_desc->init_mutex. acpi_nfit_shutdown()
first acquires init_mutex and then attempts to cancel any pending
workqueue items. This reversed locking order causes a potential
deadlock:

======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc3 #116 Tainted: G   O N
------------------------------------------------------
libndctl/1958 is trying to acquire lock:
888129b461c0 ((work_completion)(&(&acpi_desc->dwork)->work)){+.+.}-{0:0}, at: __flush_work+0x43/0x450

but task is already holding lock:
888129b460e8 (&acpi_desc->init_mutex){+.+.}-{3:3}, at: acpi_nfit_shutdown+0x87/0xd0 [nfit]

which lock already depends on the new lock.

...

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&acpi_desc->init_mutex);
                               lock((work_completion)(&(&acpi_desc->dwork)->work));
                               lock(&acpi_desc->init_mutex);
  lock((work_completion)(&(&acpi_desc->dwork)->work));

*** DEADLOCK ***

Since the workqueue manipulation is protected by its own internal locking,
the cancellation of pending work doesn't need to be done under
acpi_desc->init_mutex. Move cancel_delayed_work_sync() outside the
init_mutex to fix the deadlock. Any work that starts after
acpi_nfit_shutdown() drops the lock will see ARS_CANCEL, and the
cancel_delayed_work_sync() will safely flush it out.

Reported-by: Dan Williams 
Signed-off-by: Vishal Verma 
Link: 
https://lore.kernel.org/r/20230112-acpi_nfit_lockdep-v1-1-660be4dd1...@intel.com
Signed-off-by: Dan Williams 
Signed-off-by: Sasha Levin 
---
 drivers/acpi/nfit/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 58a756ca14d85..c2863eec0f241 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3442,8 +3442,8 @@ void acpi_nfit_shutdown(void *data)
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
/*
 * Bounce the nvdimm bus lock to make sure any in-flight
-- 
2.39.0




[PATCH AUTOSEL 5.4 3/7] ACPI: NFIT: fix a potential deadlock during NFIT teardown

2023-02-15 Thread Sasha Levin
From: Vishal Verma 

[ Upstream commit fb6df4366f86dd252bfa3049edffa52d17e7b895 ]

Lockdep reports that acpi_nfit_shutdown() may deadlock against an
opportune acpi_nfit_scrub(). acpi_nfit_scrub() is run from inside a
'work' and therefore has already acquired workqueue-internal locks. It
also acquires acpi_desc->init_mutex. acpi_nfit_shutdown() first
acquires init_mutex, and was subsequently attempting to cancel any
pending workqueue items. This reversed locking order causes a potential
deadlock:

==
WARNING: possible circular locking dependency detected
6.2.0-rc3 #116 Tainted: G   O N
--
libndctl/1958 is trying to acquire lock:
ffff888129b461c0 ((work_completion)(&(&acpi_desc->dwork)->work)){+.+.}-{0:0},
at: __flush_work+0x43/0x450

but task is already holding lock:
ffff888129b460e8 (&acpi_desc->init_mutex){+.+.}-{3:3}, at:
acpi_nfit_shutdown+0x87/0xd0 [nfit]

which lock already depends on the new lock.

...

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&acpi_desc->init_mutex);
                               lock((work_completion)(&(&acpi_desc->dwork)->work));
                               lock(&acpi_desc->init_mutex);
  lock((work_completion)(&(&acpi_desc->dwork)->work));

*** DEADLOCK ***

Since the workqueue manipulation is protected by its own internal locking,
the cancellation of pending work doesn't need to be done under
acpi_desc->init_mutex. Move cancel_delayed_work_sync() outside the
init_mutex to fix the deadlock. Any work that starts after
acpi_nfit_shutdown() drops the lock will see ARS_CANCEL, and the
cancel_delayed_work_sync() will safely flush it out.

Reported-by: Dan Williams 
Signed-off-by: Vishal Verma 
Link: 
https://lore.kernel.org/r/20230112-acpi_nfit_lockdep-v1-1-660be4dd1...@intel.com
Signed-off-by: Dan Williams 
Signed-off-by: Sasha Levin 
---
 drivers/acpi/nfit/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 0fe4f3ed72ca4..793b8d9d749a0 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3599,8 +3599,8 @@ void acpi_nfit_shutdown(void *data)
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
/*
 * Bounce the nvdimm bus lock to make sure any in-flight
-- 
2.39.0




[PATCH AUTOSEL 5.10 4/8] ACPI: NFIT: fix a potential deadlock during NFIT teardown

2023-02-15 Thread Sasha Levin
From: Vishal Verma 

[ Upstream commit fb6df4366f86dd252bfa3049edffa52d17e7b895 ]

Lockdep reports that acpi_nfit_shutdown() may deadlock against an
opportune acpi_nfit_scrub(). acpi_nfit_scrub() is run from inside a
'work' and therefore has already acquired workqueue-internal locks. It
also acquires acpi_desc->init_mutex. acpi_nfit_shutdown() first
acquires init_mutex, and was subsequently attempting to cancel any
pending workqueue items. This reversed locking order causes a potential
deadlock:

==
WARNING: possible circular locking dependency detected
6.2.0-rc3 #116 Tainted: G   O N
--
libndctl/1958 is trying to acquire lock:
ffff888129b461c0 ((work_completion)(&(&acpi_desc->dwork)->work)){+.+.}-{0:0},
at: __flush_work+0x43/0x450

but task is already holding lock:
ffff888129b460e8 (&acpi_desc->init_mutex){+.+.}-{3:3}, at:
acpi_nfit_shutdown+0x87/0xd0 [nfit]

which lock already depends on the new lock.

...

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&acpi_desc->init_mutex);
                               lock((work_completion)(&(&acpi_desc->dwork)->work));
                               lock(&acpi_desc->init_mutex);
  lock((work_completion)(&(&acpi_desc->dwork)->work));

*** DEADLOCK ***

Since the workqueue manipulation is protected by its own internal locking,
the cancellation of pending work doesn't need to be done under
acpi_desc->init_mutex. Move cancel_delayed_work_sync() outside the
init_mutex to fix the deadlock. Any work that starts after
acpi_nfit_shutdown() drops the lock will see ARS_CANCEL, and the
cancel_delayed_work_sync() will safely flush it out.

Reported-by: Dan Williams 
Signed-off-by: Vishal Verma 
Link: 
https://lore.kernel.org/r/20230112-acpi_nfit_lockdep-v1-1-660be4dd1...@intel.com
Signed-off-by: Dan Williams 
Signed-off-by: Sasha Levin 
---
 drivers/acpi/nfit/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 99e23a5df0267..2306abb09f7f5 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3687,8 +3687,8 @@ void acpi_nfit_shutdown(void *data)
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
/*
 * Bounce the nvdimm bus lock to make sure any in-flight
-- 
2.39.0




[PATCH AUTOSEL 5.15 06/12] ACPI: NFIT: fix a potential deadlock during NFIT teardown

2023-02-15 Thread Sasha Levin
From: Vishal Verma 

[ Upstream commit fb6df4366f86dd252bfa3049edffa52d17e7b895 ]

Lockdep reports that acpi_nfit_shutdown() may deadlock against an
opportune acpi_nfit_scrub(). acpi_nfit_scrub() is run from inside a
'work' and therefore has already acquired workqueue-internal locks. It
also acquires acpi_desc->init_mutex. acpi_nfit_shutdown() first
acquires init_mutex, and was subsequently attempting to cancel any
pending workqueue items. This reversed locking order causes a potential
deadlock:

==
WARNING: possible circular locking dependency detected
6.2.0-rc3 #116 Tainted: G   O N
--
libndctl/1958 is trying to acquire lock:
ffff888129b461c0 ((work_completion)(&(&acpi_desc->dwork)->work)){+.+.}-{0:0},
at: __flush_work+0x43/0x450

but task is already holding lock:
ffff888129b460e8 (&acpi_desc->init_mutex){+.+.}-{3:3}, at:
acpi_nfit_shutdown+0x87/0xd0 [nfit]

which lock already depends on the new lock.

...

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&acpi_desc->init_mutex);
                               lock((work_completion)(&(&acpi_desc->dwork)->work));
                               lock(&acpi_desc->init_mutex);
  lock((work_completion)(&(&acpi_desc->dwork)->work));

*** DEADLOCK ***

Since the workqueue manipulation is protected by its own internal locking,
the cancellation of pending work doesn't need to be done under
acpi_desc->init_mutex. Move cancel_delayed_work_sync() outside the
init_mutex to fix the deadlock. Any work that starts after
acpi_nfit_shutdown() drops the lock will see ARS_CANCEL, and the
cancel_delayed_work_sync() will safely flush it out.

Reported-by: Dan Williams 
Signed-off-by: Vishal Verma 
Link: 
https://lore.kernel.org/r/20230112-acpi_nfit_lockdep-v1-1-660be4dd1...@intel.com
Signed-off-by: Dan Williams 
Signed-off-by: Sasha Levin 
---
 drivers/acpi/nfit/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index 7dd80acf92c78..2575d6c51f898 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3676,8 +3676,8 @@ void acpi_nfit_shutdown(void *data)
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
/*
 * Bounce the nvdimm bus lock to make sure any in-flight
-- 
2.39.0




[PATCH AUTOSEL 6.1 09/24] ACPI: NFIT: fix a potential deadlock during NFIT teardown

2023-02-15 Thread Sasha Levin
From: Vishal Verma 

[ Upstream commit fb6df4366f86dd252bfa3049edffa52d17e7b895 ]

Lockdep reports that acpi_nfit_shutdown() may deadlock against an
opportune acpi_nfit_scrub(). acpi_nfit_scrub() is run from inside a
'work' and therefore has already acquired workqueue-internal locks. It
also acquires acpi_desc->init_mutex. acpi_nfit_shutdown() first
acquires init_mutex, and was subsequently attempting to cancel any
pending workqueue items. This reversed locking order causes a potential
deadlock:

==
WARNING: possible circular locking dependency detected
6.2.0-rc3 #116 Tainted: G   O N
--
libndctl/1958 is trying to acquire lock:
ffff888129b461c0 ((work_completion)(&(&acpi_desc->dwork)->work)){+.+.}-{0:0},
at: __flush_work+0x43/0x450

but task is already holding lock:
ffff888129b460e8 (&acpi_desc->init_mutex){+.+.}-{3:3}, at:
acpi_nfit_shutdown+0x87/0xd0 [nfit]

which lock already depends on the new lock.

...

Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&acpi_desc->init_mutex);
                               lock((work_completion)(&(&acpi_desc->dwork)->work));
                               lock(&acpi_desc->init_mutex);
  lock((work_completion)(&(&acpi_desc->dwork)->work));

*** DEADLOCK ***

Since the workqueue manipulation is protected by its own internal locking,
the cancellation of pending work doesn't need to be done under
acpi_desc->init_mutex. Move cancel_delayed_work_sync() outside the
init_mutex to fix the deadlock. Any work that starts after
acpi_nfit_shutdown() drops the lock will see ARS_CANCEL, and the
cancel_delayed_work_sync() will safely flush it out.

Reported-by: Dan Williams 
Signed-off-by: Vishal Verma 
Link: 
https://lore.kernel.org/r/20230112-acpi_nfit_lockdep-v1-1-660be4dd1...@intel.com
Signed-off-by: Dan Williams 
Signed-off-by: Sasha Levin 
---
 drivers/acpi/nfit/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
index ae5f4acf26753..6d4ac934cd499 100644
--- a/drivers/acpi/nfit/core.c
+++ b/drivers/acpi/nfit/core.c
@@ -3297,8 +3297,8 @@ void acpi_nfit_shutdown(void *data)
 
 	mutex_lock(&acpi_desc->init_mutex);
 	set_bit(ARS_CANCEL, &acpi_desc->scrub_flags);
-	cancel_delayed_work_sync(&acpi_desc->dwork);
 	mutex_unlock(&acpi_desc->init_mutex);
+	cancel_delayed_work_sync(&acpi_desc->dwork);
 
/*
 * Bounce the nvdimm bus lock to make sure any in-flight
-- 
2.39.0




Re: [patch 11/12] gcov: clang: fix clang-11+ build

2021-04-20 Thread Sasha Levin

On Mon, Apr 19, 2021 at 03:12:26PM -0700, Linus Torvalds wrote:

On Mon, Apr 19, 2021 at 2:37 PM Nathan Chancellor  wrote:


This should not have been merged into mainline by itself. It was a fix
for "gcov: use kvmalloc()", which is still in -mm/-next. Merging it
alone has now broken the build:

https://github.com/ClangBuiltLinux/continuous-integration2/runs/2384465683?check_suite_focus=true

Could it please be reverted in mainline [..]


Now reverted in my tree.

Sasha and stable cc'd too, since it was apparently auto-selected there..


I'll drop it from my queue, thanks!

--
Thanks,
Sasha


Re: [PATCH AUTOSEL 5.4 13/14] gcov: clang: fix clang-11+ build

2021-04-20 Thread Sasha Levin

On Tue, Apr 20, 2021 at 09:01:19AM +0200, Johannes Berg wrote:

On Mon, 2021-04-19 at 20:44 +, Sasha Levin wrote:

From: Johannes Berg 

[ Upstream commit 04c53de57cb6435738961dace8b1b71d3ecd3c39 ]

With clang-11+, the code is broken due to my kvmalloc() conversion
(which predated the clang-11 support code) leaving one vmalloc() in
place.  Fix that.


This patch might *apply* on 5.4 (and the other kernels you selected it
for), but then I'm pretty sure it'll be broken, unless you also applied
the various patches this was actually fixing (my kvmalloc conversion,
and the clang-11 support).

Also, Linus has (correctly) reverted this patch from mainline, it
shouldn't have gone there in the first place, probably really should be
squashed into my kvmalloc conversion patch that's in -mm now.

Sorry if I didn't make that clear enough at the time.


In any case, please drop this patch from all stable trees.


Will do, thanks!

--
Thanks,
Sasha


[PATCH AUTOSEL 4.4 7/7] ia64: tools: remove duplicate definition of ia64_mf() on ia64

2021-04-19 Thread Sasha Levin
From: John Paul Adrian Glaubitz 

[ Upstream commit f4bf09dc3aaa4b07cd15630f2023f68cb2668809 ]

The ia64_mf() macro defined in tools/arch/ia64/include/asm/barrier.h is
already defined in <asm/gcc_intrin.h> on ia64 which causes libbpf
failing to build:

CC   /usr/src/linux/tools/bpf/bpftool//libbpf/staticobjs/libbpf.o
  In file included from /usr/src/linux/tools/include/asm/barrier.h:24,
   from /usr/src/linux/tools/include/linux/ring_buffer.h:4,
   from libbpf.c:37:
  /usr/src/linux/tools/include/asm/../../arch/ia64/include/asm/barrier.h:43: 
error: "ia64_mf" redefined [-Werror]
 43 | #define ia64_mf()   asm volatile ("mf" ::: "memory")
|
  In file included from /usr/include/ia64-linux-gnu/asm/intrinsics.h:20,
   from /usr/include/ia64-linux-gnu/asm/swab.h:11,
   from /usr/include/linux/swab.h:8,
   from /usr/include/linux/byteorder/little_endian.h:13,
   from /usr/include/ia64-linux-gnu/asm/byteorder.h:5,
   from /usr/src/linux/tools/include/uapi/linux/perf_event.h:20,
   from libbpf.c:36:
  /usr/include/ia64-linux-gnu/asm/gcc_intrin.h:382: note: this is the location 
of the previous definition
382 | #define ia64_mf() __asm__ volatile ("mf" ::: "memory")
|
  cc1: all warnings being treated as errors

Thus, remove the definition from tools/arch/ia64/include/asm/barrier.h.

Signed-off-by: John Paul Adrian Glaubitz 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
Signed-off-by: Sasha Levin 
---
 tools/arch/ia64/include/asm/barrier.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/arch/ia64/include/asm/barrier.h 
b/tools/arch/ia64/include/asm/barrier.h
index e4422b4b634e..94ae4a333a35 100644
--- a/tools/arch/ia64/include/asm/barrier.h
+++ b/tools/arch/ia64/include/asm/barrier.h
@@ -38,9 +38,6 @@
  * sequential memory pages only.
  */
 
-/* XXX From arch/ia64/include/uapi/asm/gcc_intrin.h */
-#define ia64_mf()   asm volatile ("mf" ::: "memory")
-
 #define mb()   ia64_mf()
 #define rmb()  mb()
 #define wmb()  mb()
-- 
2.30.2



[PATCH AUTOSEL 4.4 6/7] ia64: fix discontig.c section mismatches

2021-04-19 Thread Sasha Levin
From: Randy Dunlap 

[ Upstream commit e2af9da4f867a1a54f1252bf3abc1a5c63951778 ]

Fix IA64 discontig.c Section mismatch warnings.

When CONFIG_SPARSEMEM=y and CONFIG_MEMORY_HOTPLUG=y, the functions
computer_pernodesize() and scatter_node_data() should not be marked as
__meminit because they are needed after init, on any memory hotplug
event.  Also, early_nr_cpus_node() is called by compute_pernodesize(),
so early_nr_cpus_node() cannot be __meminit either.

  WARNING: modpost: vmlinux.o(.text.unlikely+0x1612): Section mismatch in 
reference from the function arch_alloc_nodedata() to the function 
.meminit.text:compute_pernodesize()
  The function arch_alloc_nodedata() references the function __meminit 
compute_pernodesize().
  This is often because arch_alloc_nodedata lacks a __meminit annotation or the 
annotation of compute_pernodesize is wrong.

  WARNING: modpost: vmlinux.o(.text.unlikely+0x1692): Section mismatch in 
reference from the function arch_refresh_nodedata() to the function 
.meminit.text:scatter_node_data()
  The function arch_refresh_nodedata() references the function __meminit 
scatter_node_data().
  This is often because arch_refresh_nodedata lacks a __meminit annotation or 
the annotation of scatter_node_data is wrong.

  WARNING: modpost: vmlinux.o(.text.unlikely+0x1502): Section mismatch in 
reference from the function compute_pernodesize() to the function 
.meminit.text:early_nr_cpus_node()
  The function compute_pernodesize() references the function __meminit 
early_nr_cpus_node().
  This is often because compute_pernodesize lacks a __meminit annotation or the 
annotation of early_nr_cpus_node is wrong.

Link: https://lkml.kernel.org/r/20210411001201.3069-1-rdun...@infradead.org
Signed-off-by: Randy Dunlap 
Cc: Mike Rapoport 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
Signed-off-by: Sasha Levin 
---
 arch/ia64/mm/discontig.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index 878626805369..3b0c892953ab 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -99,7 +99,7 @@ static int __init build_node_maps(unsigned long start, 
unsigned long len,
  * acpi_boot_init() (which builds the node_to_cpu_mask array) hasn't been
  * called yet.  Note that node 0 will also count all non-existent cpus.
  */
-static int __meminit early_nr_cpus_node(int node)
+static int early_nr_cpus_node(int node)
 {
int cpu, n = 0;
 
@@ -114,7 +114,7 @@ static int __meminit early_nr_cpus_node(int node)
  * compute_pernodesize - compute size of pernode data
  * @node: the node id.
  */
-static unsigned long __meminit compute_pernodesize(int node)
+static unsigned long compute_pernodesize(int node)
 {
unsigned long pernodesize = 0, cpus;
 
@@ -411,7 +411,7 @@ static void __init reserve_pernode_space(void)
}
 }
 
-static void __meminit scatter_node_data(void)
+static void scatter_node_data(void)
 {
pg_data_t **dst;
int node;
-- 
2.30.2



[PATCH AUTOSEL 4.4 5/7] i2c: mv64xxx: Fix random system lock caused by runtime PM

2021-04-19 Thread Sasha Levin
From: Marek BehĂșn 

[ Upstream commit 39930213e7779b9c4257499972b8afb8858f1a2d ]

I noticed a weird bug with this driver on Marvell CN9130 Customer
Reference Board.

Sometime after boot, the system locks with the following message:
 [104.071363] i2c i2c-0: mv64xxx: I2C bus locked, block: 1, time_left: 0

The system does not respond afterwards, only warns about RCU stalls.

This first appeared with commit e5c02cf54154 ("i2c: mv64xxx: Add runtime
PM support").

With further experimentation I discovered that adding a delay into
mv64xxx_i2c_hw_init() fixes this issue. This function is called before
every xfer, due to how runtime PM works in this driver. It seems that in
order to work correctly, a delay is needed after the bus is reset in
this function.

Since there already is a known erratum with this controller needing a
delay, I assume that this is just another place this needs to be
applied. Therefore I apply the delay only if errata_delay is true.

Signed-off-by: Marek BehĂșn 
Acked-by: Gregory CLEMENT 
Reviewed-by: Samuel Holland 
Signed-off-by: Wolfram Sang 
Signed-off-by: Sasha Levin 
---
 drivers/i2c/busses/i2c-mv64xxx.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c
index 332d32c53c41..73324f047932 100644
--- a/drivers/i2c/busses/i2c-mv64xxx.c
+++ b/drivers/i2c/busses/i2c-mv64xxx.c
@@ -219,6 +219,10 @@ mv64xxx_i2c_hw_init(struct mv64xxx_i2c_data *drv_data)
writel(0, drv_data->reg_base + drv_data->reg_offsets.ext_addr);
writel(MV64XXX_I2C_REG_CONTROL_TWSIEN | MV64XXX_I2C_REG_CONTROL_STOP,
drv_data->reg_base + drv_data->reg_offsets.control);
+
+   if (drv_data->errata_delay)
+   udelay(5);
+
drv_data->state = MV64XXX_I2C_STATE_IDLE;
 }
 
-- 
2.30.2



[PATCH AUTOSEL 4.4 4/7] cavium/liquidio: Fix duplicate argument

2021-04-19 Thread Sasha Levin
From: Wan Jiabing 

[ Upstream commit 416dcc5ce9d2a810477171c62ffa061a98f87367 ]

Fix the following coccicheck warning:

./drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h:413:6-28:
duplicated argument to & or |

The CN6XXX_INTR_M1UPB0_ERR here is duplicate.
Here should be CN6XXX_INTR_M1UNB0_ERR.

Signed-off-by: Wan Jiabing 
Signed-off-by: David S. Miller 
Signed-off-by: Sasha Levin 
---
 drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h 
b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
index 5e3aff242ad3..3ab84d18ad3a 100644
--- a/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
+++ b/drivers/net/ethernet/cavium/liquidio/cn66xx_regs.h
@@ -417,7 +417,7 @@
   | CN6XXX_INTR_M0UNWI_ERR \
   | CN6XXX_INTR_M1UPB0_ERR \
   | CN6XXX_INTR_M1UPWI_ERR \
-  | CN6XXX_INTR_M1UPB0_ERR \
+  | CN6XXX_INTR_M1UNB0_ERR \
   | CN6XXX_INTR_M1UNWI_ERR \
   | CN6XXX_INTR_INSTR_DB_OF_ERR\
   | CN6XXX_INTR_SLIST_DB_OF_ERR\
-- 
2.30.2



[PATCH AUTOSEL 4.4 3/7] xen-netback: Check for hotplug-status existence before watching

2021-04-19 Thread Sasha Levin
From: Michael Brown 

[ Upstream commit 2afeec08ab5c86ae21952151f726bfe184f6b23d ]

The logic in connect() is currently written with the assumption that
xenbus_watch_pathfmt() will return an error for a node that does not
exist.  This assumption is incorrect: xenstore does allow a watch to
be registered for a nonexistent node (and will send notifications
should the node be subsequently created).

As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it
has served its purpose"), this leads to a failure when a domU
transitions into XenbusStateConnected more than once.  On the first
domU transition into Connected state, the "hotplug-status" node will
be deleted by the hotplug_status_changed() callback in dom0.  On the
second or subsequent domU transition into Connected state, the
hotplug_status_changed() callback will therefore never be invoked, and
so the backend will remain stuck in InitWait.

This failure prevents scenarios such as reloading the xen-netfront
module within a domU, or booting a domU via iPXE.  There is
unfortunately no way for the domU to work around this dom0 bug.

Fix by explicitly checking for existence of the "hotplug-status" node,
thereby creating the behaviour that was previously assumed to exist.

Signed-off-by: Michael Brown 
Reviewed-by: Paul Durrant 
Signed-off-by: David S. Miller 
Signed-off-by: Sasha Levin 
---
 drivers/net/xen-netback/xenbus.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 21c8e2720b40..683fd8560f2b 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -849,11 +849,15 @@ static void connect(struct backend_info *be)
xenvif_carrier_on(be->vif);
 
unregister_hotplug_status_watch(be);
-	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
-				   hotplug_status_changed,
-				   "%s/%s", dev->nodename, "hotplug-status");
-	if (!err)
+	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
+		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
+					   NULL, hotplug_status_changed,
+					   "%s/%s", dev->nodename,
+					   "hotplug-status");
+   if (err)
+   goto err;
be->have_hotplug_status_watch = 1;
+   }
 
netif_tx_wake_all_queues(be->vif->dev);
 
-- 
2.30.2



[PATCH AUTOSEL 4.4 1/7] ARM: dts: Fix swapped mmc order for omap3

2021-04-19 Thread Sasha Levin
From: Tony Lindgren 

[ Upstream commit a1ebdb3741993f853865d1bd8f77881916ad53a7 ]

Also some omap3 devices like n900 seem to have eMMC and micro-sd swapped
around with commit 21b2cec61c04 ("mmc: Set PROBE_PREFER_ASYNCHRONOUS for
drivers that existed in v4.4").

Let's fix the issue with aliases as discussed on the mailing lists. While
the mmc aliases should be board specific, let's first fix the issue with
minimal changes.

Cc: Aaro Koskinen 
Cc: Peter Ujfalusi 
Signed-off-by: Tony Lindgren 
Signed-off-by: Sasha Levin 
---
 arch/arm/boot/dts/omap3.dtsi | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm/boot/dts/omap3.dtsi b/arch/arm/boot/dts/omap3.dtsi
index 8a2b25332b8c..a2e41d79e829 100644
--- a/arch/arm/boot/dts/omap3.dtsi
+++ b/arch/arm/boot/dts/omap3.dtsi
@@ -22,6 +22,9 @@ aliases {
 		i2c0 = &i2c1;
 		i2c1 = &i2c2;
 		i2c2 = &i2c3;
+		mmc0 = &mmc1;
+		mmc1 = &mmc2;
+		mmc2 = &mmc3;
 		serial0 = &uart1;
 		serial1 = &uart2;
 		serial2 = &uart3;
-- 
2.30.2



[PATCH AUTOSEL 4.4 2/7] s390/entry: save the caller of psw_idle

2021-04-19 Thread Sasha Levin
From: Vasily Gorbik 

[ Upstream commit a994eddb947ea9ebb7b14d9a1267001699f0a136 ]

Currently psw_idle does not allocate a stack frame and does not
save its r14 and r15 into the save area. Even though this is valid from
call ABI point of view, because psw_idle does not make any calls
explicitly, in reality psw_idle is an entry point for controlled
transition into serving interrupts. So, in practice, psw_idle stack
frame is analyzed during stack unwinding. Depending on build options
that r14 slot in the save area of psw_idle might either contain a value
saved by previous sibling call or complete garbage.

  [task03803c28] do_ext_irq+0xd6/0x160
  [task03803c78] ext_int_handler+0xba/0xe8
  [task   *03803dd8] psw_idle_exit+0x0/0x8 <-- pt_regs
 ([task03803dd8] 0x0)
  [task03803e10] default_idle_call+0x42/0x148
  [task03803e30] do_idle+0xce/0x160
  [task03803e70] cpu_startup_entry+0x36/0x40
  [task03803ea0] arch_call_rest_init+0x76/0x80

So, to make a stacktrace nicer and actually point for the real caller of
psw_idle in this frequently occurring case, make psw_idle save its r14.

  [task03803c28] do_ext_irq+0xd6/0x160
  [task03803c78] ext_int_handler+0xba/0xe8
  [task   *03803dd8] psw_idle_exit+0x0/0x6 <-- pt_regs
 ([task03803dd8] arch_cpu_idle+0x3c/0xd0)
  [task03803e10] default_idle_call+0x42/0x148
  [task03803e30] do_idle+0xce/0x160
  [task03803e70] cpu_startup_entry+0x36/0x40
  [task03803ea0] arch_call_rest_init+0x76/0x80

Reviewed-by: Sven Schnelle 
Signed-off-by: Vasily Gorbik 
Signed-off-by: Heiko Carstens 
Signed-off-by: Sasha Levin 
---
 arch/s390/kernel/entry.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 4cad1adff16b..d43f18b3d42c 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -889,6 +889,7 @@ ENTRY(ext_int_handler)
  * Load idle PSW. The second "half" of this function is in .Lcleanup_idle.
  */
 ENTRY(psw_idle)
+   stg %r14,(__SF_GPRS+8*8)(%r15)
stg %r3,__SF_EMPTY(%r15)
	larl	%r1,.Lpsw_idle_lpsw+4
stg %r1,__SF_EMPTY+8(%r15)
-- 
2.30.2



[PATCH AUTOSEL 4.9 8/8] ia64: tools: remove duplicate definition of ia64_mf() on ia64

2021-04-19 Thread Sasha Levin
From: John Paul Adrian Glaubitz 

[ Upstream commit f4bf09dc3aaa4b07cd15630f2023f68cb2668809 ]

The ia64_mf() macro defined in tools/arch/ia64/include/asm/barrier.h is
already defined in <asm/gcc_intrin.h> on ia64 which causes libbpf
failing to build:

CC   /usr/src/linux/tools/bpf/bpftool//libbpf/staticobjs/libbpf.o
  In file included from /usr/src/linux/tools/include/asm/barrier.h:24,
   from /usr/src/linux/tools/include/linux/ring_buffer.h:4,
   from libbpf.c:37:
  /usr/src/linux/tools/include/asm/../../arch/ia64/include/asm/barrier.h:43: 
error: "ia64_mf" redefined [-Werror]
 43 | #define ia64_mf()   asm volatile ("mf" ::: "memory")
|
  In file included from /usr/include/ia64-linux-gnu/asm/intrinsics.h:20,
   from /usr/include/ia64-linux-gnu/asm/swab.h:11,
   from /usr/include/linux/swab.h:8,
   from /usr/include/linux/byteorder/little_endian.h:13,
   from /usr/include/ia64-linux-gnu/asm/byteorder.h:5,
   from /usr/src/linux/tools/include/uapi/linux/perf_event.h:20,
   from libbpf.c:36:
  /usr/include/ia64-linux-gnu/asm/gcc_intrin.h:382: note: this is the location 
of the previous definition
382 | #define ia64_mf() __asm__ volatile ("mf" ::: "memory")
|
  cc1: all warnings being treated as errors

Thus, remove the definition from tools/arch/ia64/include/asm/barrier.h.

Signed-off-by: John Paul Adrian Glaubitz 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
Signed-off-by: Sasha Levin 
---
 tools/arch/ia64/include/asm/barrier.h | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/arch/ia64/include/asm/barrier.h 
b/tools/arch/ia64/include/asm/barrier.h
index e4422b4b634e..94ae4a333a35 100644
--- a/tools/arch/ia64/include/asm/barrier.h
+++ b/tools/arch/ia64/include/asm/barrier.h
@@ -38,9 +38,6 @@
  * sequential memory pages only.
  */
 
-/* XXX From arch/ia64/include/uapi/asm/gcc_intrin.h */
-#define ia64_mf()   asm volatile ("mf" ::: "memory")
-
 #define mb()   ia64_mf()
 #define rmb()  mb()
 #define wmb()  mb()
-- 
2.30.2


