There is also a typo in the Makefile which causes a modules.livepatch file
to be created in the kernel source tree even when building an external
module.
> diff --git a/Makefile b/Makefile
> index 2fdd8b40b7e0..459b9c9fe0a8 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -1185,6 +1185,7 @@ PHONY +=
Have both __locks_insert_block and the deadlock and conflict checking
functions take a struct file_lock_core pointer instead of a struct
file_lock one. Also, change posix_locks_deadlock to return bool.
Signed-off-by: Jeff Layton
---
fs/locks.c | 134
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/nfsd/filecache.c| 4 +--
fs/nfsd/netns.h| 1 -
fs/nfsd/nfs4callback.c | 2 +-
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/smb/client/cifsglob.h | 1 -
fs/smb/client/cifssmb.c | 9 +++---
fs/smb/client/file.c | 75
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/ocfs2/locks.c | 13 ++---
fs/ocfs2/stack_user.c | 3 +--
2 files changed, 7 insertions(+), 9
Convert these functions to take a file_lock_core instead of a file_lock.
Signed-off-by: Jeff Layton
---
fs/locks.c | 18 +-
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index effe84f954f9..ad4bb9cd4c9d 100644
--- a/fs/locks.c
+++
This patch creates two ".cocci" semantic patches in a top level cocci/
directory. These patches were used to help generate several of the
following patches. We can drop this patch or move the files to a more
appropriate location before merging.
Signed-off-by: Jeff Layton
---
In later patches we're going to introduce some macros with names that
clash with the variable names here. Rename them.
Signed-off-by: Jeff Layton
---
fs/nfsd/nfs4state.c | 24
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/fs/nfsd/nfs4state.c
Rename the old __locks_delete_block to __locks_unlink_lock. Change the
old locks_delete_block function to __locks_delete_block and have it take
a file_lock_core. Make locks_delete_block a simple wrapper around
__locks_delete_block.
Also, change __locks_insert_block to take struct
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/nfs/delegation.c | 4 ++--
fs/nfs/file.c | 23 +++
fs/nfs/nfs3proc.c | 2 +-
Both locks and leases deal with fl_blocker. Switch the fl_blocker
pointer in struct file_lock_core to point to the file_lock_core of the
blocker instead of a file_lock structure.
Signed-off-by: Jeff Layton
---
fs/locks.c | 16
include/linux/filelock.h
> -Original Message-
> From: Willem de Bruijn [mailto:willemdebruijn.ker...@gmail.com]
> Sent: Thursday, January 25, 2024 3:05 AM
> To: wangyunjian ; m...@redhat.com;
> willemdebruijn.ker...@gmail.com; jasow...@redhat.com; k...@kernel.org;
> da...@davemloft.net; magnus.karls...@intel.com
>
Toke Høiland-Jørgensen writes:
> "Ubisectech Sirius" writes:
>
>>>Hmm, so from eyeballing the code in question, this looks like it is
>>>another initialisation race along the lines of the one fixed in commit:
>>>8b3046abc99e ("ath9k_htc: fix NULL pointer dereference at
>>>
On Fri, Jan 12, 2024 at 07:10:50PM +0900, Masami Hiramatsu (Google) wrote:
> Hi,
>
> Here is the 6th version of the series to re-implement the fprobe on
> function-graph tracer. The previous version is:
>
> https://lore.kernel.org/all/170290509018.220107.1347127510564358608.stgit@devnote2/
>
>
Hi Masami,
Thanks for taking the time to look at those changes.
On Thu, Jan 25, 2024 at 12:11:49AM +0900, Masami Hiramatsu wrote:
> On Tue, 23 Jan 2024 11:07:54 +
> Vincent Donnefort wrote:
>
> [...]
> > @@ -6592,8 +6641,11 @@ int tracing_set_tracer(struct trace_array *tr, const
> > char
In later patches we're going to introduce some macros that will clash
with the variable name here. Rename it.
Signed-off-by: Jeff Layton
---
fs/locks.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index cc7c117ee192..1eceaa56e47f 100644
---
In later patches we're going to introduce some macros with names that
clash with fields here. To prevent problems building, just rename the
fields in the trace entry structures.
Signed-off-by: Jeff Layton
---
include/trace/events/filelock.h | 76 -
1 file
Long ago, file locks used to hang off of a singly-linked list in struct
inode. Because of this, when leases were added, they were added to the
same list and so they had to be tracked using the same sort of
structure.
Several years ago, we added struct file_lock_context, which allowed us
to use
Convert __locks_delete_block and __locks_wake_up_blocks to take a struct
file_lock_core pointer.
While we could do this in another way, we're going to need to add a
file_lock() helper function later anyway, so introduce and use it now.
Signed-off-by: Jeff Layton
---
fs/locks.c | 45
Have locks_insert_global_blocked and locks_delete_global_blocked take a
struct file_lock_core pointer.
Signed-off-by: Jeff Layton
---
fs/locks.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index ad4bb9cd4c9d..d6d47612527c 100644
---
In a future patch, we're going to split file leases into their own
structure. Since a lot of the underlying machinery uses the same fields
move those into a new file_lock_core, and embed that inside struct
file_lock.
For now, add some macros to ensure that we can continue to build while
the
"Ubisectech Sirius" writes:
>>Hmm, so from eyeballing the code in question, this looks like it is
>>another initialisation race along the lines of the one fixed in commit:
>>8b3046abc99e ("ath9k_htc: fix NULL pointer dereference at
>>ath9k_htc_tx_get_packet()")
>>Could you please test the patch
Convert posix_owner_key to take struct file_lock_core pointer, and fix
up the callers to pass one in.
Signed-off-by: Jeff Layton
---
fs/locks.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index bd0cfee230ae..effe84f954f9 100644
---
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/lockd/clnt4xdr.c | 14 +-
fs/lockd/clntlock.c | 2 +-
fs/lockd/clntproc.c | 62
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/dlm/plock.c | 45 ++---
1 file changed, 22 insertions(+), 23
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/gfs2/file.c | 17 -
1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/fs/gfs2/file.c
Reduce some pointer manipulation by just using file_lock_core where we
can and only translate to a file_lock when needed.
Signed-off-by: Jeff Layton
---
fs/locks.c | 71 +++---
1 file changed, 36 insertions(+), 35 deletions(-)
diff --git
Rework the internals of locks_delete_block to use struct file_lock_core
(mostly just for clarity's sake). The prototype is not changed.
Signed-off-by: Jeff Layton
---
fs/locks.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index
These don't add a lot of value over just open-coding the flag check.
Suggested-by: NeilBrown
Signed-off-by: Jeff Layton
---
fs/locks.c | 32 +++-
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 1eceaa56e47f..87212f86eca9
In later patches we're going to introduce macros that conflict with the
variable name here. Rename it.
Signed-off-by: Jeff Layton
---
fs/afs/flock.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index 9c6dea3139f5..e7feaf66bddf 100644
On Wed, Jan 24, 2024 at 5:08 PM Xuan Zhuo wrote:
>
> On Wed, 24 Jan 2024 16:57:19 +0800, Liang Chen
> wrote:
> > The xdp program may overwrite the inline virtio header. To ensure the
> > integrity of the virtio header, it is saved in a data structure that
> > wraps both the xdp_buff and the
In later patches we're going to introduce some macros with names that
clash with the variable names here. Rename them.
Signed-off-by: Jeff Layton
---
fs/lockd/clntproc.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/fs/lockd/clntproc.c
In later patches, we're going to introduce some macros that conflict
with the variable name here. Rename it.
Signed-off-by: Jeff Layton
---
fs/9p/vfs_file.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index
Have these functions take a file_lock_core pointer instead of a
file_lock.
Signed-off-by: Jeff Layton
---
fs/locks.c | 44 ++--
1 file changed, 22 insertions(+), 22 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 03985cfb7eff..0491d621417d 100644
Have locks_wake_up_blocks take a file_lock_core pointer, and fix up the
callers to pass one in.
Signed-off-by: Jeff Layton
---
fs/locks.c | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 6182f5c5e7b4..03985cfb7eff 100644
---
Have assign_type take struct file_lock_core instead of file_lock.
Signed-off-by: Jeff Layton
---
fs/locks.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 647a778d2c85..6182f5c5e7b4 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -439,13
Convert more internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:
- locks_dump_ctx_list
- locks_check_ctx_file_list
- locks_release_private
- locks_owner_has_blockers
Signed-off-by: Jeff Layton
---
fs/locks.c | 51
Change posix_same_owner to take struct file_lock_core pointers, and
convert the callers to pass those in.
Signed-off-by: Jeff Layton
---
fs/locks.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index a0d6fc0e043a..bd0cfee230ae
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/afs/flock.c | 55 +++---
fs/afs/internal.h | 1 -
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/ceph/locks.c | 75 +
1 file changed, 38 insertions(+),
On Sun, 17 Dec 2023, Karel Balej wrote:
> From: Karel Balej
>
> Marvell 88PM880 and 88PM886 are two similar PMICs with mostly matching
> register mapping. They provide various functions such as onkey, battery,
> charger and regulators.
>
> Add support for 88PM886 found for instance in the
Kalle Valo writes:
> Toke Høiland-Jørgensen writes:
>
>> "Ubisectech Sirius" writes:
>>
Hmm, so from eyeballing the code in question, this looks like it is
another initialisation race along the lines of the one fixed in commit:
8b3046abc99e ("ath9k_htc: fix NULL pointer dereference
On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> Long ago, file locks used to hang off of a singly-linked list in struct
> inode. Because of this, when leases were added, they were added to the
> same list and so they had to be tracked using the same sort of
> structure.
>
> Several
In later patches we're going to introduce some temporary macros with
names that clash with the variable name here. Rename it.
Signed-off-by: Jeff Layton
---
fs/dlm/plock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index
In later patches we're going to introduce some temporary macros with
names that clash with the variable name here. Rename it.
Signed-off-by: Jeff Layton
---
fs/nfs/nfs4proc.c | 10 +-
fs/nfs/nfs4state.c | 16
2 files changed, 13 insertions(+), 13 deletions(-)
diff
Signed-off-by: Jeff Layton
---
fs/locks.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/fs/locks.c b/fs/locks.c
index 0491d621417d..e8afdd084245 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2169,17 +2169,17 @@ EXPORT_SYMBOL_GPL(vfs_test_lock);
*
*
The RSS hash report is a feature that's part of the virtio specification.
Currently, virtio backends like qemu, vdpa (mlx5), and potentially vhost
(still a work in progress as per [1]) support this feature. While the
capability to obtain the RSS hash has been enabled in the normal path,
it's
Convert fs/locks.c to access fl_core fields directly rather than using
the backward-compatibility macros. Most of this was done with
coccinelle, with a few by-hand fixups.
Signed-off-by: Jeff Layton
---
fs/locks.c | 479
Convert some internal fs/locks.c function to take and deal with struct
file_lock_core instead of struct file_lock:
- locks_init_lock_heads
- locks_alloc_lock
- locks_init_lock
Signed-off-by: Jeff Layton
---
fs/locks.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff
Add a new struct file_lease and move the lease-specific fields from
struct file_lock to it. Convert the appropriate API calls to take
struct file_lease instead, and convert the callers to use them.
There is zero overlap between the lock manager operations for file
locks and the ones for file
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/smb/server/smb2pdu.c | 45 ++---
fs/smb/server/vfs.c | 15
Everything has been converted to access fl_core fields directly, so we
can now drop these.
Signed-off-by: Jeff Layton
---
include/linux/filelock.h | 16
1 file changed, 16 deletions(-)
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.
Signed-off-by: Jeff Layton
---
fs/9p/vfs_file.c | 39 +++
1 file changed, 19 insertions(+), 20 deletions(-)
> -Original Message-
> From: Jason Wang [mailto:jasow...@redhat.com]
> Sent: Thursday, January 25, 2024 12:49 PM
> To: wangyunjian
> Cc: m...@redhat.com; willemdebruijn.ker...@gmail.com; k...@kernel.org;
> da...@davemloft.net; magnus.karls...@intel.com; net...@vger.kernel.org;
>
Hi Vincent,
On Thu, 25 Jan 2024 14:53:40 +
Vincent Donnefort wrote:
> > > @@ -1470,12 +1483,20 @@ register_snapshot_trigger(char *glob,
> > > struct event_trigger_data *data,
> > > struct trace_event_file *file)
> > > {
> > > - if
Alongside the base address, arm64 will also need to know the size of a
tag storage region. Teach of_flat_dt_translate_address() to parse and
return the size.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch, suggested by Rob Herring.
arch/sh/kernel/cpu/sh2/probe.c | 2
According to ARM DDI 0487J.a, page D10-5976, a memory location which
doesn't have the Normal memory attribute is considered Untagged, and
accesses are Tag Unchecked. Tag reads from an Untagged address return
0b, and writes are ignored.
Linux uses VM_PFNMAP VMAs to represent device memory, and
Add the function of_flat_read_u32() to return the value of a property as
a u32.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch, suggested by Rob Herring.
drivers/of/fdt.c | 21 +
include/linux/of_fdt.h | 2 ++
2 files changed, 23
Faking a tag storage region for FVP is useful for testing.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch, not intended to be merged.
arch/arm64/boot/dts/arm/fvp-base-revc.dts | 42 +--
1 file changed, 39 insertions(+), 3 deletions(-)
diff --git
Everything is in place, enable tag storage management.
Signed-off-by: Alexandru Elisei
---
arch/arm64/Kconfig | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 088e30fc6d12..95c153705a2c 100644
--- a/arch/arm64/Kconfig
+++
From: Masami Hiramatsu (Google)
Fix register_snapshot_trigger() to return an error code if it fails to
allocate a snapshot instead of 0 (success). Otherwise, it will register
the snapshot trigger without an error.
Fixes: 0bbe7f719985 ("tracing: Fix the race between registering 'snapshot'
event
On Wed, 24 Jan 2024 12:03:45 -0800 Vishal Verma
wrote:
> This series adds sysfs ABI to control memmap_on_memory behavior for DAX
> devices.
Thanks. I'll add this to mm-unstable for some additional testing, but
I do think we should have the evidence of additional review on this
series's four
Hi Vincent,
kernel test robot noticed the following build errors:
[auto build test ERROR on 4f1991a92cfe89096b2d1f5583a2e093bdd55c37]
url:
https://github.com/intel-lab-lkp/linux/commits/Vincent-Donnefort/ring-buffer-Zero-ring-buffer-sub-buffers/20240123-191131
base:
Hello:
This patch was applied to netdev/net-next.git (main)
by Jakub Kicinski :
On Wed, 24 Jan 2024 22:32:55 +0300 you wrote:
> SOCK_SEQPACKET is supported for virtio transport, so do not interpret
> such type of socket as unknown.
>
> Signed-off-by: Arseniy Krasnov
> ---
>
>Great, thank you for testing! I'll send a proper patch. How would you
>like to be credited with reporting? Just as 'Ubisectech Sirius
>' ?
Hello.
Please use 'Ubisectech Sirius' to credit the
report. Thanks.
Hi Vincent,
kernel test robot noticed the following build errors:
[auto build test ERROR on 4f1991a92cfe89096b2d1f5583a2e093bdd55c37]
url:
https://github.com/intel-lab-lkp/linux/commits/Vincent-Donnefort/ring-buffer-Zero-ring-buffer-sub-buffers/20240123-191131
base:
On 1/25/24 22:56, Luca Weiss wrote:
From: Vladimir Lypak
Add the GPU node for the Adreno 506 found on this family of SoCs. The
clock speeds are a bit different per SoC variant, SDM450 maxes out at
600MHz while MSM8953 (= SDM625) goes up to 650MHz and SDM632 goes up to
725MHz.
To achieve
On 1/25/24 23:24, Dmitry Baryshkov wrote:
On 25/01/2024 23:56, Luca Weiss wrote:
From: Vladimir Lypak
Add the IOMMU used for the GPU on MSM8953.
Signed-off-by: Vladimir Lypak
---
arch/arm64/boot/dts/qcom/msm8953.dtsi | 31 +++
1 file changed, 31
On Fri, 2024-01-26 at 09:34 +1100, NeilBrown wrote:
> On Fri, 26 Jan 2024, Chuck Lever wrote:
> > On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> > > Long ago, file locks used to hang off of a singly-linked list in struct
> > > inode. Because of this, when leases were added, they
On 1/25/2024 3:03 PM, Sohil Mehta wrote:
> On 1/25/2024 10:48 AM, Avadhut Naik wrote:
>> Currently, the microcode field (Microcode Revision) of struct mce is not
>> exported to userspace through the mce_record tracepoint.
>>
>> Export it through the tracepoint as it may provide useful
cma->name is displayed in several CMA messages. When the name is generated
by the CMA code, don't append a newline to avoid breaking the text across
two lines.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch. This is a fix, and can be merged independently of the other
As an architecture might have specific requirements around the allocation
of CMA pages, add an arch hook that can disable allocations from
MIGRATE_CMA, if the allocation was otherwise allowed.
This will be used by arm64, which will put tag storage pages on the
MIGRATE_CMA list, and tag storage
The patch f945116e4e19 ("mm: page_alloc: remove stale CMA guard code")
removed the CMA filter when allocating from the MIGRATE_MOVABLE pcp list
because CMA is always allowed when __GFP_MOVABLE is set.
With the introduction of the arch_alloc_cma() function, the above is not
true anymore, so bring
Reserve tag storage for a page that is being allocated as tagged. This
is a best effort approach, and failing to reserve tag storage is
allowed.
When all the associated tagged pages have been freed, return the tag
storage pages back to the page allocator, where they can be used again for
data
Before enabling MTE tag storage management, make sure that the CMA areas
have been successfully activated. If a CMA area fails activation, the pages
are kept as reserved. Reserved pages are never used by the page allocator.
If this happens, the kernel would have to manage tag storage only for
Make sure the contents of the tag storage block are not corrupted by
performing:
1. A tag dcache inval when the associated tagged pages are freed, to avoid
dirty tag cache lines being evicted and corrupting the tag storage
block when it's being used to store data.
2. A data cache inval when
Reserving the tag storage associated with a tagged page requires that
the tag storage can be migrated, if it's in use for data.
The kernel allocates pages in non-preemptible contexts, which makes
migration impossible. The only user of tagged pages in the kernel is HW
KASAN, so don't
Tag storage pages mapped by the host in a VM with MTE enabled are migrated
when they are first accessed by the guest. This introduces latency spikes
for memory accesses made by the guest.
Tag storage pages can be mapped in the guest memory when the VM_MTE VMA
flag is not set. Introduce a new VMA
KVM allows MTE enabled VMs to be created when the backing VMA does not have
MTE enabled. As a result, pages allocated for the virtual machine's memory
won't have tag storage reserved. Try to reserve tag storage the first time
the page is accessed by the guest. This is similar to how pages mapped
copy_user_highpage() will do memory allocation if there are saved tags for
the destination page, and the page is missing tag storage.
After commit a349d72fd9ef ("mm/pgtable: add rcu_read_lock() and
rcu_read_unlock()s"), collapse_huge_page() calls
__collapse_huge_page_copy() -> .. ->
On Thu, Jan 25, 2024 at 09:59:03AM +0900, Masami Hiramatsu wrote:
> On Tue, 23 Jan 2024 22:08:41 +
> Beau Belgrave wrote:
>
> > The current code for finding and deleting events assumes that there will
> > never be cases when user_events are registered with the same name, but
> > different
On 1/21/24 11:09, Luca Weiss wrote:
This device has a vibrator attached to the CAMSS_GP0_CLK, use clk-pwm
and pwm-vibrator to make the vibrator work.
Signed-off-by: Luca Weiss
---
now your mainlined smartwatch can wake you up!
Reviewed-by: Konrad Dybcio
Konrad
Extend the usefulness of arch_alloc_page() by adding the gfp_flags
parameter.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch.
arch/s390/include/asm/page.h | 2 +-
arch/s390/mm/page-states.c | 2 +-
include/linux/gfp.h | 2 +-
mm/page_alloc.c | 2
The arm64 MTE code uses the PG_arch_2 page flag, which it renames to
PG_mte_tagged, to track if a page has been mapped with tagging enabled.
That flag is cleared by free_pages_prepare() by doing:
page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
When tag storage management is added, tag storage
Introduce a mechanism that allows an architecture to trigger a page fault,
and add the infrastructure to handle that fault accordingly. To make
use of this, an arch is expected to mark the table entry as PAGE_NONE (which
will cause a fault next time it is accessed) and to implement an
arm64 uses VM_HIGH_ARCH_0 and VM_HIGH_ARCH_1 for enabling MTE for a VMA.
When VM_HIGH_ARCH_0, which arm64 renames to VM_MTE, is set for a VMA, and
the gfp flag __GFP_ZERO is present, the __GFP_ZEROTAGS gfp flag also gets
set in vma_alloc_zeroed_movable_folio().
Expand this to be more generic by
arm64 uses arch_swap_restore() to restore saved tags before the page is
swapped in and it's called in atomic context (with the ptl lock held).
Introduce arch_swap_prepare_to_restore() that will allow an architecture to
perform extra work during swap in and outside of a critical section.
This will
On Thu, Jan 25, 2024 at 12:48:55PM -0600, Avadhut Naik wrote:
> This patchset updates the mce_record tracepoint so that the recently added
> fields of struct mce are exported through it to userspace.
>
> The first patch adds PPIN (Protected Processor Inventory Number) field to
> the tracepoint.
>
The tag save/restore/copy functions could be more explicit about where
the tags are coming from and where they are being copied to. Rename the
functions to make it easier to understand what they are doing:
- Rename the mte_clear_page_tags() 'addr' parameter to 'page_addr', to
match the
__GFP_ZEROTAGS is used to instruct the page allocator to zero the tags at
the same time as the physical frame is zeroed. The name can be slightly
misleading, because it doesn't mean that the code will zero the tags
unconditionally, but that the tags will be zeroed if and only if the
physical frame
Hi Arnaud,
On Thu, Jan 18, 2024 at 11:04:30AM +0100, Arnaud Pouliquen wrote:
> From: Arnaud Pouliquen
>
> Add a remoteproc TEE (Trusted Execution Environment) device
Device or driver? Seems to be the latter...
> that will be probed by the TEE bus. If the associated Trusted
> application is
On Wed, Jan 24, 2024 at 09:09:08AM -0500, Steven Rostedt wrote:
> I don't think that's a worry anymore. The offsets can change based on
> kernel config. PowerTop needed to have the library ported to it because
> it use to hardcode the offsets but then it broke when running the 32bit
> version on a
On Fri, Jan 12, 2024 at 07:17:06PM +0900, Masami Hiramatsu (Google) wrote:
SNIP
> * Register @fp to ftrace for enabling the probe on the address given by
> @addrs.
> @@ -298,23 +547,27 @@ EXPORT_SYMBOL_GPL(register_fprobe);
> */
> int register_fprobe_ips(struct fprobe *fp, unsigned long
On 1/24/24 16:31, Luca Weiss wrote:
Add the definitions for the various thermal zones found on the SM6350
SoC. Hooking up GPU and CPU cooling can limit the clock speeds there to
reduce the temperature again to good levels.
Most thermal zones only have one critical temperature configured at
Today, cma_alloc() is used to allocate a contiguous memory region. The
function allows the caller to specify the number of pages to allocate, but
not the starting address. cma_alloc() will walk over the entire CMA region
trying to allocate the first available range of the specified size.
The CMA_ALLOC_SUCCESS and CMA_ALLOC_FAIL counters are increased by one
after each cma_alloc() function call. This is done even though cma_alloc()
can allocate an arbitrary number of CMA pages. When looking at
/proc/vmstat, the number of successful (or failed) cma_alloc() calls
doesn't tell much
Similar to the two events that relate to CMA allocations, add the
CMA_RELEASE_SUCCESS and CMA_RELEASE_FAIL events that count when CMA pages
are freed.
Signed-off-by: Alexandru Elisei
---
Changes since rfc v2:
* New patch.
include/linux/vm_event_item.h | 2 ++
mm/cma.c |
There are three situations in which a page that is to be mapped as
tagged doesn't have the corresponding tag storage reserved:
* reserve_tag_storage() failed.
* The allocation didn't specify __GFP_TAGGED (this can happen during
migration, for example).
* The page was mapped in a non-MTE
On arm64, when a page is mapped as tagged, its tags are zeroed for two
reasons:
* To prevent leakage of tags to userspace.
* To allow userspace to access the contents of the page without having to set
the tags explicitly (bits 59:56 of a userspace pointer are zero, which
correspond to tag
Tag storage pages cannot be tagged. When such a page is mapped in an
MTE-enabled VMA, migrate it out directly and don't try to reserve tag
storage for it.
Signed-off-by: Alexandru Elisei
---
arch/arm64/include/asm/mte_tag_storage.h | 1 +
arch/arm64/kernel/mte_tag_storage.c | 15