Re: [PATCH v3 25/47] filelock: convert __locks_insert_block, conflict and deadlock checks to use file_lock_core

2024-02-18 Thread Jeff Layton
On Wed, 2024-01-31 at 18:02 -0500, Jeff Layton wrote:
> Have both __locks_insert_block and the deadlock and conflict checking
> functions take a struct file_lock_core pointer instead of a struct
> file_lock one. Also, change posix_locks_deadlock to return bool.
> 
> Signed-off-by: Jeff Layton 
> ---
>  fs/locks.c | 132 ++++++++++++++++++++++++++++++++------------------------
>  1 file changed, 72 insertions(+), 60 deletions(-)
> 
> diff --git a/fs/locks.c b/fs/locks.c
> index 1e8b943bd7f9..0dc1c9da858c 100644
> --- a/fs/locks.c
> +++ b/fs/locks.c
> @@ -757,39 +757,41 @@ EXPORT_SYMBOL(locks_delete_block);
>   * waiters, and add beneath any waiter that blocks the new waiter.
>   * Thus wakeups don't happen until needed.
>   */
> -static void __locks_insert_block(struct file_lock *blocker,
> -  struct file_lock *waiter,
> -  bool conflict(struct file_lock *,
> -struct file_lock *))
> +static void __locks_insert_block(struct file_lock *blocker_fl,
> +  struct file_lock *waiter_fl,
> +  bool conflict(struct file_lock_core *,
> +struct file_lock_core *))
>  {
> - struct file_lock *fl;
> - BUG_ON(!list_empty(&waiter->c.flc_blocked_member));
> + struct file_lock_core *blocker = &blocker_fl->c;
> + struct file_lock_core *waiter = &waiter_fl->c;
> + struct file_lock_core *flc;
>  
> + BUG_ON(!list_empty(&waiter->flc_blocked_member));
>  new_blocker:
> - list_for_each_entry(fl, &blocker->c.flc_blocked_requests,
> - c.flc_blocked_member)
> - if (conflict(fl, waiter)) {
> - blocker = fl;
> + list_for_each_entry(flc, &blocker->flc_blocked_requests, 
> flc_blocked_member)
> + if (conflict(flc, waiter)) {
> + blocker = flc;
>   goto new_blocker;
>   }
> - waiter->c.flc_blocker = blocker;
> - list_add_tail(&waiter->c.flc_blocked_member,
> -   &blocker->c.flc_blocked_requests);
> - if ((blocker->c.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
> - locks_insert_global_blocked(&waiter->c);
> + waiter->flc_blocker = file_lock(blocker);
> + list_add_tail(&waiter->flc_blocked_member,
> +   &blocker->flc_blocked_requests);
>  
> - /* The requests in waiter->fl_blocked are known to conflict with
> + if ((blocker->flc_flags & (FL_POSIX|FL_OFDLCK)) == (FL_POSIX|FL_OFDLCK))

Christian,

There is a bug in the above delta. That should read:

if ((blocker->flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)

I suspect that is the cause of the performance regression noted by the
kernel test robot (KTR).

I believe the bug is fairly harmless -- it's just putting OFD locks into
the global hash when it doesn't need to, which probably slows down
deadlock checking. I'm going to spin up a patch and test it today, but I
wanted to give you a heads up.

I'll send the patch later today or tomorrow.
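To make the inverted flag check concrete outside the kernel, here is a tiny
standalone sketch. The flag values and helper names below are illustrative
only, not the kernel's actual definitions (the real constants live in
include/linux/filelock.h):

```c
#include <stdbool.h>

/* Illustrative flag bits; only the bit positions matter for this sketch. */
#define FL_POSIX  0x001
#define FL_OFDLCK 0x400

/* Only classic POSIX locks (FL_POSIX set, FL_OFDLCK clear) belong in the
 * global hash used for deadlock detection; OFD locks aren't owned by a
 * process, so the deadlock detector skips them. OFD locks carry both bits,
 * which is why comparing the masked value against FL_POSIX excludes them. */
bool needs_global_blocked(unsigned int flc_flags)
{
	return (flc_flags & (FL_POSIX | FL_OFDLCK)) == FL_POSIX;
}

/* The buggy version compared against (FL_POSIX | FL_OFDLCK), which is true
 * exactly for OFD locks: it inserted OFD locks into the hash and skipped
 * classic POSIX locks entirely. */
bool needs_global_blocked_buggy(unsigned int flc_flags)
{
	return (flc_flags & (FL_POSIX | FL_OFDLCK)) == (FL_POSIX | FL_OFDLCK);
}
```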
 
> + locks_insert_global_blocked(waiter);
> +
> + /* The requests in waiter->flc_blocked are known to conflict with
>* waiter, but might not conflict with blocker, or the requests
>* and lock which block it.  So they all need to be woken.
>*/
> - __locks_wake_up_blocks(&waiter->c);
> + __locks_wake_up_blocks(waiter);
>  }
>  
>  /* Must be called with flc_lock held. */
>  static void locks_insert_block(struct file_lock *blocker,
>  struct file_lock *waiter,
> -bool conflict(struct file_lock *,
> -  struct file_lock *))
> +bool conflict(struct file_lock_core *,
> +  struct file_lock_core *))
>  {
>   spin_lock(&blocked_lock_lock);
>   __locks_insert_block(blocker, waiter, conflict);
> @@ -846,12 +848,12 @@ locks_delete_lock_ctx(struct file_lock *fl, struct 
> list_head *dispose)
>  /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
>   * checks for shared/exclusive status of overlapping locks.
>   */
> -static bool locks_conflict(struct file_lock *caller_fl,
> -struct file_lock *sys_fl)
> +static bool locks_conflict(struct file_lock_core *caller_flc,
> +struct file_lock_core *sys_flc)
>  {
> - if (lock_is_write(sys_fl))
> + if (sys_flc->flc_type == F_WRLCK)
>   return true;
> - if (lock_is_write(caller_fl))
> + if (caller_flc->flc_type == F_WRLCK)
>   return true;
>   return
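The rule locks_conflict() implements reduces to: two overlapping locks
conflict unless both are read (shared) locks. A standalone sketch, with
illustrative type values rather than the kernel's F_RDLCK/F_WRLCK constants:

```c
#include <stdbool.h>

/* Illustrative lock-type values (the kernel's come from the fcntl uapi). */
enum { RDLCK_X = 0, WRLCK_X = 1 };

/* A conflict exists if either side holds (or requests) a write lock;
 * two read locks can coexist on the same range. */
bool types_conflict(int caller_type, int sys_type)
{
	return sys_type == WRLCK_X || caller_type == WRLCK_X;
}
```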

Re: [PATCH v3 04/47] filelock: add some new helper functions

2024-02-05 Thread Jeff Layton
On Mon, 2024-02-05 at 12:57 +0100, Christian Brauner wrote:
> On Mon, Feb 05, 2024 at 06:55:44AM -0500, Jeff Layton wrote:
> > On Mon, 2024-02-05 at 12:36 +0100, Christian Brauner wrote:
> > > > diff --git a/include/linux/filelock.h b/include/linux/filelock.h
> > > > index 085ff6ba0653..a814664b1053 100644
> > > > --- a/include/linux/filelock.h
> > > > +++ b/include/linux/filelock.h
> > > > @@ -147,6 +147,29 @@ int fcntl_setlk64(unsigned int, struct file *, 
> > > > unsigned int,
> > > >  int fcntl_setlease(unsigned int fd, struct file *filp, int arg);
> > > >  int fcntl_getlease(struct file *filp);
> > > >  
> > > > +static inline bool lock_is_unlock(struct file_lock *fl)
> > > > +{
> > > > +   return fl->fl_type == F_UNLCK;
> > > > +}
> > > > +
> > > > +static inline bool lock_is_read(struct file_lock *fl)
> > > > +{
> > > > +   return fl->fl_type == F_RDLCK;
> > > > +}
> > > > +
> > > > +static inline bool lock_is_write(struct file_lock *fl)
> > > > +{
> > > > +   return fl->fl_type == F_WRLCK;
> > > > +}
> > > > +
> > > > +static inline void locks_wake_up(struct file_lock *fl)
> > > > +{
> > > > +   wake_up(&fl->fl_wait);
> > > > +}
> > > > +
> > > > +/* for walking lists of file_locks linked by fl_list */
> > > > +#define for_each_file_lock(_fl, _head) list_for_each_entry(_fl, _head, 
> > > > fl_list)
> > > > +
> > > 
> > > This causes a build warning for fs/ceph/ and fs/afs when
> > > !CONFIG_FILE_LOCKING. I'm about to fold the following diff into this
> > > patch. The diff looks a bit wonky but essentially I've moved
> > > lock_is_unlock(), lock_is_{read,write}(), locks_wake_up() and
> > > for_each_file_lock() out of the ifdef CONFIG_FILE_LOCKING:
> > > 
> > 
> > I sent a patch for this problem yesterday. Did you not get it?
> 
> Whoops, probably missed it on the trip back from fosdem.
> I'll double check now.

No worries. If you choose to go with your version, you can add:

Reviewed-by: Jeff Layton 



Re: [PATCH v3 04/47] filelock: add some new helper functions

2024-02-05 Thread Jeff Layton
On Mon, 2024-02-05 at 12:36 +0100, Christian Brauner wrote:
> > diff --git a/include/linux/filelock.h b/include/linux/filelock.h
> > index 085ff6ba0653..a814664b1053 100644
> > --- a/include/linux/filelock.h
> > +++ b/include/linux/filelock.h
> > @@ -147,6 +147,29 @@ int fcntl_setlk64(unsigned int, struct file *, 
> > unsigned int,
> >  int fcntl_setlease(unsigned int fd, struct file *filp, int arg);
> >  int fcntl_getlease(struct file *filp);
> >  
> > +static inline bool lock_is_unlock(struct file_lock *fl)
> > +{
> > +   return fl->fl_type == F_UNLCK;
> > +}
> > +
> > +static inline bool lock_is_read(struct file_lock *fl)
> > +{
> > +   return fl->fl_type == F_RDLCK;
> > +}
> > +
> > +static inline bool lock_is_write(struct file_lock *fl)
> > +{
> > +   return fl->fl_type == F_WRLCK;
> > +}
> > +
> > +static inline void locks_wake_up(struct file_lock *fl)
> > +{
> > +   wake_up(&fl->fl_wait);
> > +}
> > +
> > +/* for walking lists of file_locks linked by fl_list */
> > +#define for_each_file_lock(_fl, _head) list_for_each_entry(_fl, _head, 
> > fl_list)
> > +
> 
> This causes a build warning for fs/ceph/ and fs/afs when
> !CONFIG_FILE_LOCKING. I'm about to fold the following diff into this
> patch. The diff looks a bit wonky but essentially I've moved
> lock_is_unlock(), lock_is_{read,write}(), locks_wake_up() and
> for_each_file_lock() out of the ifdef CONFIG_FILE_LOCKING:
> 

I sent a patch for this problem yesterday. Did you not get it?


> diff --git a/include/linux/filelock.h b/include/linux/filelock.h
> index a814664b1053..62be9c6b1e59 100644
> --- a/include/linux/filelock.h
> +++ b/include/linux/filelock.h
> @@ -133,20 +133,6 @@ struct file_lock_context {
> 	struct list_head	flc_lease;
>  };
> 
> -#ifdef CONFIG_FILE_LOCKING
> -int fcntl_getlk(struct file *, unsigned int, struct flock *);
> -int fcntl_setlk(unsigned int, struct file *, unsigned int,
> -   struct flock *);
> -
> -#if BITS_PER_LONG == 32
> -int fcntl_getlk64(struct file *, unsigned int, struct flock64 *);
> -int fcntl_setlk64(unsigned int, struct file *, unsigned int,
> -   struct flock64 *);
> -#endif
> -
> -int fcntl_setlease(unsigned int fd, struct file *filp, int arg);
> -int fcntl_getlease(struct file *filp);
> -
>  static inline bool lock_is_unlock(struct file_lock *fl)
>  {
> return fl->fl_type == F_UNLCK;
> @@ -170,6 +156,20 @@ static inline void locks_wake_up(struct file_lock *fl)
>  /* for walking lists of file_locks linked by fl_list */
>  #define for_each_file_lock(_fl, _head) list_for_each_entry(_fl, _head, 
> fl_list)
> 
> +#ifdef CONFIG_FILE_LOCKING
> +int fcntl_getlk(struct file *, unsigned int, struct flock *);
> +int fcntl_setlk(unsigned int, struct file *, unsigned int,
> +   struct flock *);
> +
> +#if BITS_PER_LONG == 32
> +int fcntl_getlk64(struct file *, unsigned int, struct flock64 *);
> +int fcntl_setlk64(unsigned int, struct file *, unsigned int,
> +   struct flock64 *);
> +#endif
> +
> +int fcntl_setlease(unsigned int fd, struct file *filp, int arg);
> +int fcntl_getlease(struct file *filp);
> +
>  /* fs/locks.c */
>  void locks_free_lock_context(struct inode *inode);
>  void locks_free_lock(struct file_lock *fl);
> 

-- 
Jeff Layton 



[PATCH v3 46/47] filelock: remove temporary compatibility macros

2024-01-31 Thread Jeff Layton
Everything has been converted to access fl_core fields directly, so we
can now drop these.

Signed-off-by: Jeff Layton 
---
 include/linux/filelock.h | 16 ----------------
 1 file changed, 16 deletions(-)

diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index fdec838a3ca7..ceadd979e110 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -131,22 +131,6 @@ struct file_lock {
} fl_u;
 } __randomize_layout;
 
-/* Temporary macros to allow building during coccinelle conversion */
-#ifdef _NEED_FILE_LOCK_FIELD_MACROS
-#define fl_list c.flc_list
-#define fl_blocker c.flc_blocker
-#define fl_link c.flc_link
-#define fl_blocked_requests c.flc_blocked_requests
-#define fl_blocked_member c.flc_blocked_member
-#define fl_owner c.flc_owner
-#define fl_flags c.flc_flags
-#define fl_type c.flc_type
-#define fl_pid c.flc_pid
-#define fl_link_cpu c.flc_link_cpu
-#define fl_wait c.flc_wait
-#define fl_file c.flc_file
-#endif
-
 struct file_lock_context {
	spinlock_t		flc_lock;
	struct list_head	flc_flock;

-- 
2.43.0




[PATCH v3 38/47] gfs2: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/gfs2/file.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index d06488de1b3b..4c42ada60ae7 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,7 +15,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -1441,7 +1440,7 @@ static int gfs2_lock(struct file *file, int cmd, struct 
file_lock *fl)
struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
	struct lm_lockstruct *ls = &sdp->sd_lockstruct;
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->c.flc_flags & FL_POSIX))
return -ENOLCK;
if (gfs2_withdrawing_or_withdrawn(sdp)) {
if (lock_is_unlock(fl))
@@ -1484,7 +1483,7 @@ static int do_flock(struct file *file, int cmd, struct 
file_lock *fl)
int error = 0;
int sleeptime;
 
-   state = (lock_is_write(fl)) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
+   state = lock_is_write(fl) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
flags = GL_EXACT | GL_NOPID;
if (!IS_SETLKW(cmd))
flags |= LM_FLAG_TRY_1CB;
@@ -1496,8 +1495,8 @@ static int do_flock(struct file *file, int cmd, struct 
file_lock *fl)
if (fl_gh->gh_state == state)
goto out;
	locks_init_lock(&request);
-   request.fl_type = F_UNLCK;
-   request.fl_flags = FL_FLOCK;
+   request.c.flc_type = F_UNLCK;
+   request.c.flc_flags = FL_FLOCK;
	locks_lock_file_wait(file, &request);
gfs2_glock_dq(fl_gh);
gfs2_holder_reinit(state, flags, fl_gh);
@@ -1558,7 +1557,7 @@ static void do_unflock(struct file *file, struct 
file_lock *fl)
 
 static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
 {
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->c.flc_flags & FL_FLOCK))
return -ENOLCK;
 
if (lock_is_unlock(fl)) {

-- 
2.43.0




[PATCH v3 16/47] filelock: drop the IS_* macros

2024-01-31 Thread Jeff Layton
These don't add a lot of value over just open-coding the flag check.

Suggested-by: NeilBrown 
Signed-off-by: Jeff Layton 
---
 fs/locks.c | 32 +++++++++++++++-----------------
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 149070fd3b66..d685c3fdbea5 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -70,12 +70,6 @@
 
 #include 
 
-#define IS_POSIX(fl)   (fl->fl_flags & FL_POSIX)
-#define IS_FLOCK(fl)   (fl->fl_flags & FL_FLOCK)
-#define IS_LEASE(fl)   (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
-#define IS_OFDLCK(fl)  (fl->fl_flags & FL_OFDLCK)
-#define IS_REMOTELCK(fl)   (fl->fl_pid <= 0)
-
 static bool lease_breaking(struct file_lock *fl)
 {
return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -767,7 +761,7 @@ static void __locks_insert_block(struct file_lock *blocker,
}
waiter->fl_blocker = blocker;
	list_add_tail(&waiter->fl_blocked_member, 
&blocker->fl_blocked_requests);
-   if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
+   if ((blocker->fl_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
locks_insert_global_blocked(waiter);
 
/* The requests in waiter->fl_blocked are known to conflict with
@@ -999,7 +993,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
 * This deadlock detector can't reasonably detect deadlocks with
 * FL_OFDLCK locks, since they aren't owned by a process, per-se.
 */
-   if (IS_OFDLCK(caller_fl))
+   if (caller_fl->fl_flags & FL_OFDLCK)
return 0;
 
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
@@ -2150,10 +2144,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, 
struct pid_namespace *ns)
pid_t vnr;
struct pid *pid;
 
-   if (IS_OFDLCK(fl))
+   if (fl->fl_flags & FL_OFDLCK)
return -1;
-   if (IS_REMOTELCK(fl))
+
+   /* Remote locks report a negative pid value */
+   if (fl->fl_pid <= 0)
return fl->fl_pid;
+
/*
 * If the flock owner process is dead and its pid has been already
 * freed, the translation below won't work, but we still want to show
@@ -2697,7 +2694,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = 
proc_pid_ns(file_inode(f->file)->i_sb);
-   int type;
+   int type = fl->fl_type;
 
pid = locks_translate_pid(fl, proc_pidns);
/*
@@ -2714,19 +2711,21 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
 
-   if (IS_POSIX(fl)) {
+   if (fl->fl_flags & FL_POSIX) {
if (fl->fl_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
-   else if (IS_OFDLCK(fl))
+   else if (fl->fl_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
 
seq_printf(f, " %s ",
 (inode == NULL) ? "*NOINODE*" : "ADVISORY ");
-   } else if (IS_FLOCK(fl)) {
+   } else if (fl->fl_flags & FL_FLOCK) {
seq_puts(f, "FLOCK  ADVISORY  ");
-   } else if (IS_LEASE(fl)) {
+   } else if (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+   type = target_leasetype(fl);
+
if (fl->fl_flags & FL_DELEG)
seq_puts(f, "DELEG  ");
else
@@ -2741,7 +2740,6 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
} else {
seq_puts(f, "UNKNOWN UNKNOWN  ");
}
-   type = IS_LEASE(fl) ? target_leasetype(fl) : fl->fl_type;
 
seq_printf(f, "%s ", (type == F_WRLCK) ? "WRITE" :
 (type == F_RDLCK) ? "READ" : "UNLCK");
@@ -2753,7 +2751,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
} else {
seq_printf(f, "%d :0 ", pid);
}
-   if (IS_POSIX(fl)) {
+   if (fl->fl_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else

-- 
2.43.0




[PATCH v3 17/47] filelock: split common fields into struct file_lock_core

2024-01-31 Thread Jeff Layton
In a future patch, we're going to split file leases into their own
structure. Since a lot of the underlying machinery uses the same fields,
move those into a new struct file_lock_core, and embed that inside struct
file_lock.

For now, add some macros to ensure that we can continue to build while
the conversion is in progress.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c  |  1 +
 fs/afs/internal.h |  1 +
 fs/ceph/locks.c   |  1 +
 fs/dlm/plock.c|  1 +
 fs/fuse/file.c|  1 +
 fs/gfs2/file.c|  1 +
 fs/lockd/clntproc.c   |  1 +
 fs/locks.c|  1 +
 fs/nfs/file.c |  1 +
 fs/nfs/nfs4_fs.h  |  1 +
 fs/nfs/write.c|  1 +
 fs/nfsd/netns.h   |  1 +
 fs/ocfs2/locks.c  |  1 +
 fs/ocfs2/stack_user.c |  1 +
 fs/open.c |  2 +-
 fs/posix_acl.c|  4 ++--
 fs/smb/client/cifsglob.h  |  1 +
 fs/smb/client/cifssmb.c   |  1 +
 fs/smb/client/file.c  |  3 ++-
 fs/smb/client/smb2file.c  |  1 +
 fs/smb/server/smb2pdu.c   |  1 +
 fs/smb/server/vfs.c   |  1 +
 include/linux/filelock.h  | 57 ---
 include/linux/lockd/xdr.h |  3 ++-
 24 files changed, 65 insertions(+), 23 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 3df8aa1b5996..a1dabcf73380 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 9c03fcf7ffaa..f5dd428e40f4 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index 80ebe1d6c67d..ce773e9c0b79 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,6 +7,7 @@
 
 #include "super.h"
 #include "mds_client.h"
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index 42c596b900d4..fdcddbb96d40 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -4,6 +4,7 @@
  */
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 148a71b8b4d0..2757870ee6ac 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 6c25aea30f1b..d06488de1b3b 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index cc596748e359..1f71260603b7 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/locks.c b/fs/locks.c
index d685c3fdbea5..097254ab35d3 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,6 +48,7 @@
  * children.
  *
  */
+#define _NEED_FILE_LOCK_FIELD_MACROS
 
 #include 
 #include 
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 1a7a76d6055b..0b6691e64d27 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,6 +31,7 @@
 #include 
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "delegation.h"
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 581698f1b7b2..752224a48f1c 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -23,6 +23,7 @@
 #define NFS4_MAX_LOOP_ON_RECOVER (10)
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 struct idmap;
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index d16f2b9d1765..13f2e10167ac 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include 
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 74b4360779a1..fd91125208be 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,6 +10,7 @@
 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index ef4fd91b586e..84ad403b5998 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -8,6 +8,7 @@
  */
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index c11406cd87a8..39b7e47a8618 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -9,6 +9,7 @@
 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/open.c b/fs/open.c
index a84d21e55c39..0a73afe04d34 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -1364,7 +1364,7 @@ struct file *filp_open(const char *filename, int flags, 
umode_t mode)
 {
struct f

[PATCH v3 47/47] filelock: split leases out of struct file_lock

2024-01-31 Thread Jeff Layton
Add a new struct file_lease and move the lease-specific fields from
struct file_lock to it. Convert the appropriate API calls to take
struct file_lease instead, and convert the callers to use them.

There is zero overlap between the lock manager operations for file
locks and the ones for file leases, so split the lease-related
operations off into a new lease_manager_operations struct.
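As a rough illustration of that split (the names and members below are a
pared-down, hypothetical sketch, not the kernel's full operation tables):
each manager struct carries only the callbacks its lock class can ever
invoke, so lock and lease code paths no longer share a vtable.

```c
/* Hypothetical sketch of splitting one shared ops table into two. */
struct file_lock;   /* byte-range locks (POSIX, flock, OFD) */
struct file_lease;  /* leases and delegations */

struct lock_manager_operations_x {
	void (*lm_notify)(struct file_lock *);      /* waiter unblocked */
	int  (*lm_grant)(struct file_lock *, int);  /* async grant result */
};

struct lease_manager_operations_x {
	int  (*lm_break)(struct file_lease *);      /* begin lease break */
	void (*lm_setup)(struct file_lease *, void **priv);
};
```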

Signed-off-by: Jeff Layton 
---
 fs/libfs.c  |   2 +-
 fs/locks.c  | 123 ++--
 fs/nfs/nfs4_fs.h|   2 +-
 fs/nfs/nfs4file.c   |   2 +-
 fs/nfs/nfs4proc.c   |   4 +-
 fs/nfsd/nfs4layouts.c   |  17 +++---
 fs/nfsd/nfs4state.c |  27 -
 fs/smb/client/cifsfs.c  |   2 +-
 include/linux/filelock.h|  49 ++--
 include/linux/fs.h  |   5 +-
 include/trace/events/filelock.h |  18 +++---
 11 files changed, 153 insertions(+), 98 deletions(-)

diff --git a/fs/libfs.c b/fs/libfs.c
index eec6031b0155..8b67cb4655d5 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -1580,7 +1580,7 @@ EXPORT_SYMBOL(alloc_anon_inode);
  * All arguments are ignored and it just returns -EINVAL.
  */
 int
-simple_nosetlease(struct file *filp, int arg, struct file_lock **flp,
+simple_nosetlease(struct file *filp, int arg, struct file_lease **flp,
  void **priv)
 {
return -EINVAL;
diff --git a/fs/locks.c b/fs/locks.c
index 1a4b01203d3d..33c7f4a8c729 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -74,12 +74,17 @@ static struct file_lock *file_lock(struct file_lock_core 
*flc)
return container_of(flc, struct file_lock, c);
 }
 
-static bool lease_breaking(struct file_lock *fl)
+static struct file_lease *file_lease(struct file_lock_core *flc)
+{
+   return container_of(flc, struct file_lease, c);
+}
+
+static bool lease_breaking(struct file_lease *fl)
 {
return fl->c.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
 }
 
-static int target_leasetype(struct file_lock *fl)
+static int target_leasetype(struct file_lease *fl)
 {
if (fl->c.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
@@ -166,6 +171,7 @@ static DEFINE_SPINLOCK(blocked_lock_lock);
 
 static struct kmem_cache *flctx_cache __ro_after_init;
 static struct kmem_cache *filelock_cache __ro_after_init;
+static struct kmem_cache *filelease_cache __ro_after_init;
 
 static struct file_lock_context *
 locks_get_lock_context(struct inode *inode, int type)
@@ -275,6 +281,18 @@ struct file_lock *locks_alloc_lock(void)
 }
 EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
+/* Allocate an empty lock structure. */
+struct file_lease *locks_alloc_lease(void)
+{
+   struct file_lease *fl = kmem_cache_zalloc(filelease_cache, GFP_KERNEL);
+
+   if (fl)
+   locks_init_lock_heads(&fl->c);
+
+   return fl;
+}
+EXPORT_SYMBOL_GPL(locks_alloc_lease);
+
 void locks_release_private(struct file_lock *fl)
 {
	struct file_lock_core *flc = &fl->c;
@@ -336,15 +354,25 @@ void locks_free_lock(struct file_lock *fl)
 }
 EXPORT_SYMBOL(locks_free_lock);
 
+/* Free a lease which is not in use. */
+void locks_free_lease(struct file_lease *fl)
+{
+   kmem_cache_free(filelease_cache, fl);
+}
+EXPORT_SYMBOL(locks_free_lease);
+
 static void
 locks_dispose_list(struct list_head *dispose)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
while (!list_empty(dispose)) {
-   fl = list_first_entry(dispose, struct file_lock, c.flc_list);
-   list_del_init(&fl->c.flc_list);
-   locks_free_lock(fl);
+   flc = list_first_entry(dispose, struct file_lock_core, 
flc_list);
+   list_del_init(&flc->flc_list);
+   if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
+   locks_free_lease(file_lease(flc));
+   else
+   locks_free_lock(file_lock(flc));
}
 }
 
@@ -355,6 +383,13 @@ void locks_init_lock(struct file_lock *fl)
 }
 EXPORT_SYMBOL(locks_init_lock);
 
+void locks_init_lease(struct file_lease *fl)
+{
+   memset(fl, 0, sizeof(*fl));
+   locks_init_lock_heads(&fl->c);
+}
+EXPORT_SYMBOL(locks_init_lease);
+
 /*
  * Initialize a new lock from an existing file_lock structure.
  */
@@ -518,14 +553,14 @@ static int flock_to_posix_lock(struct file *filp, struct 
file_lock *fl,
 
 /* default lease lock manager operations */
 static bool
-lease_break_callback(struct file_lock *fl)
+lease_break_callback(struct file_lease *fl)
 {
	kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
return false;
 }
 
 static void
-lease_setup(struct file_lock *fl, void **priv)
+lease_setup(struct file_lease *fl, void **priv)
 {
struct file *filp = fl->c.flc_file;
struct fasync_struct *fa = *priv;
@@ -541,7 +576,7 @@ lease_setup(struct file_lock *fl, void **priv)
__f_setown(filp, task_pid(current), P

[PATCH v3 43/47] ocfs2: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/ocfs2/locks.c  | 9 ++++-----
 fs/ocfs2/stack_user.c | 1 -
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index 84ad403b5998..6de944818c56 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -8,7 +8,6 @@
  */
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
@@ -54,8 +53,8 @@ static int ocfs2_do_flock(struct file *file, struct inode 
*inode,
 */
 
	locks_init_lock(&request);
-   request.fl_type = F_UNLCK;
-   request.fl_flags = FL_FLOCK;
+   request.c.flc_type = F_UNLCK;
+   request.c.flc_flags = FL_FLOCK;
	locks_lock_file_wait(file, &request);
 
ocfs2_file_unlock(file);
@@ -101,7 +100,7 @@ int ocfs2_flock(struct file *file, int cmd, struct 
file_lock *fl)
struct inode *inode = file->f_mapping->host;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->c.flc_flags & FL_FLOCK))
return -ENOLCK;
 
if ((osb->s_mount_opt & OCFS2_MOUNT_LOCALFLOCKS) ||
@@ -119,7 +118,7 @@ int ocfs2_lock(struct file *file, int cmd, struct file_lock 
*fl)
struct inode *inode = file->f_mapping->host;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->c.flc_flags & FL_POSIX))
return -ENOLCK;
 
return ocfs2_plock(osb->cconn, OCFS2_I(inode)->ip_blkno, file, cmd, fl);
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 39b7e47a8618..c11406cd87a8 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -9,7 +9,6 @@
 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 

-- 
2.43.0




[PATCH v3 45/47] smb/server: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/smb/server/smb2pdu.c | 39 +++
 fs/smb/server/vfs.c |  9 -
 2 files changed, 23 insertions(+), 25 deletions(-)

diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index 11cc28719582..bec0a846a8d5 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "glob.h"
@@ -6761,10 +6760,10 @@ struct file_lock *smb_flock_init(struct file *f)
 
locks_init_lock(fl);
 
-   fl->fl_owner = f;
-   fl->fl_pid = current->tgid;
-   fl->fl_file = f;
-   fl->fl_flags = FL_POSIX;
+   fl->c.flc_owner = f;
+   fl->c.flc_pid = current->tgid;
+   fl->c.flc_file = f;
+   fl->c.flc_flags = FL_POSIX;
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
 
@@ -6781,30 +6780,30 @@ static int smb2_set_flock_flags(struct file_lock 
*flock, int flags)
case SMB2_LOCKFLAG_SHARED:
ksmbd_debug(SMB, "received shared request\n");
cmd = F_SETLKW;
-   flock->fl_type = F_RDLCK;
-   flock->fl_flags |= FL_SLEEP;
+   flock->c.flc_type = F_RDLCK;
+   flock->c.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_EXCLUSIVE:
ksmbd_debug(SMB, "received exclusive request\n");
cmd = F_SETLKW;
-   flock->fl_type = F_WRLCK;
-   flock->fl_flags |= FL_SLEEP;
+   flock->c.flc_type = F_WRLCK;
+   flock->c.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_SHARED | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received shared & fail immediately request\n");
cmd = F_SETLK;
-   flock->fl_type = F_RDLCK;
+   flock->c.flc_type = F_RDLCK;
break;
case SMB2_LOCKFLAG_EXCLUSIVE | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received exclusive & fail immediately request\n");
cmd = F_SETLK;
-   flock->fl_type = F_WRLCK;
+   flock->c.flc_type = F_WRLCK;
break;
case SMB2_LOCKFLAG_UNLOCK:
ksmbd_debug(SMB, "received unlock request\n");
-   flock->fl_type = F_UNLCK;
+   flock->c.flc_type = F_UNLCK;
cmd = F_SETLK;
break;
}
@@ -6848,7 +6847,7 @@ static void smb2_remove_blocked_lock(void **argv)
 static inline bool lock_defer_pending(struct file_lock *fl)
 {
/* check pending lock waiters */
-   return waitqueue_active(&fl->fl_wait);
+   return waitqueue_active(&fl->c.flc_wait);
 }
 
 /**
@@ -6939,8 +6938,8 @@ int smb2_lock(struct ksmbd_work *work)
	list_for_each_entry(cmp_lock, &lock_list, llist) {
if (cmp_lock->fl->fl_start <= flock->fl_start &&
cmp_lock->fl->fl_end >= flock->fl_end) {
-   if (cmp_lock->fl->fl_type != F_UNLCK &&
-   flock->fl_type != F_UNLCK) {
+   if (cmp_lock->fl->c.flc_type != F_UNLCK &&
+   flock->c.flc_type != F_UNLCK) {
pr_err("conflict two locks in one 
request\n");
err = -EINVAL;
locks_free_lock(flock);
@@ -6988,12 +6987,12 @@ int smb2_lock(struct ksmbd_work *work)
	list_for_each_entry(conn, &conn_list, conns_list) {
		spin_lock(&conn->llist_lock);
		list_for_each_entry_safe(cmp_lock, tmp2, 
&conn->lock_list, clist) {
-   if (file_inode(cmp_lock->fl->fl_file) !=
-   file_inode(smb_lock->fl->fl_file))
+   if (file_inode(cmp_lock->fl->c.flc_file) !=
+   file_inode(smb_lock->fl->c.flc_file))
continue;
 
if (lock_is_unlock(smb_lock->fl)) {
-   if (cmp_lock->fl->fl_file == 
smb_lock->fl->fl_file &&
+   if (cmp_lock->fl->c.flc_file == 
smb_lock->fl->c.flc_file &&
cmp_lo

[PATCH v3 44/47] smb/client: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/smb/client/cifsglob.h |  1 -
 fs/smb/client/cifssmb.c  |  9 +++
 fs/smb/client/file.c | 67 +---
 fs/smb/client/smb2file.c |  3 +--
 4 files changed, 40 insertions(+), 40 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index 78a994caadaf..16befff4cbb4 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -26,7 +26,6 @@
 #include 
 #include "../common/smb2pdu.h"
 #include "smb2pdu.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #define SMB_PATH_MAX 260
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index e19ecf692c20..5eb83bafc7fd 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -15,7 +15,6 @@
  /* want to reuse a stale file handle and only the caller knows the file info 
*/
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -2067,20 +2066,20 @@ CIFSSMBPosixLock(const unsigned int xid, struct cifs_tcon *tcon,
parm_data = (struct cifs_posix_lock *)
((char *)&pSMBr->hdr.Protocol + data_offset);
if (parm_data->lock_type == cpu_to_le16(CIFS_UNLCK))
-   pLockData->fl_type = F_UNLCK;
+   pLockData->c.flc_type = F_UNLCK;
else {
if (parm_data->lock_type ==
cpu_to_le16(CIFS_RDLCK))
-   pLockData->fl_type = F_RDLCK;
+   pLockData->c.flc_type = F_RDLCK;
else if (parm_data->lock_type ==
cpu_to_le16(CIFS_WRLCK))
-   pLockData->fl_type = F_WRLCK;
+   pLockData->c.flc_type = F_WRLCK;
 
pLockData->fl_start = le64_to_cpu(parm_data->start);
pLockData->fl_end = pLockData->fl_start +
(le64_to_cpu(parm_data->length) ?
 le64_to_cpu(parm_data->length) - 1 : 0);
-   pLockData->fl_pid = -le32_to_cpu(parm_data->pid);
+   pLockData->c.flc_pid = -le32_to_cpu(parm_data->pid);
}
}
 
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index 32d3a27236fc..6c4df0d2b641 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -9,7 +9,6 @@
  *
  */
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -1313,20 +1312,20 @@ cifs_lock_test(struct cifsFileInfo *cfile, __u64 offset, __u64 length,
down_read(&cinode->lock_sem);
 
exist = cifs_find_lock_conflict(cfile, offset, length, type,
-   flock->fl_flags, &conf_lock,
+   flock->c.flc_flags, &conf_lock,
CIFS_LOCK_OP);
if (exist) {
flock->fl_start = conf_lock->offset;
flock->fl_end = conf_lock->offset + conf_lock->length - 1;
-   flock->fl_pid = conf_lock->pid;
+   flock->c.flc_pid = conf_lock->pid;
if (conf_lock->type & server->vals->shared_lock_type)
-   flock->fl_type = F_RDLCK;
+   flock->c.flc_type = F_RDLCK;
else
-   flock->fl_type = F_WRLCK;
+   flock->c.flc_type = F_WRLCK;
} else if (!cinode->can_cache_brlcks)
rc = 1;
else
-   flock->fl_type = F_UNLCK;
+   flock->c.flc_type = F_UNLCK;
 
up_read(&cinode->lock_sem);
return rc;
@@ -1402,16 +1401,16 @@ cifs_posix_lock_test(struct file *file, struct file_lock *flock)
 {
int rc = 0;
struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
-   unsigned char saved_type = flock->fl_type;
+   unsigned char saved_type = flock->c.flc_type;
 
-   if ((flock->fl_flags & FL_POSIX) == 0)
+   if ((flock->c.flc_flags & FL_POSIX) == 0)
return 1;
 
down_read(&cinode->lock_sem);
posix_test_lock(file, flock);
 
if (lock_is_unlock(flock) && !cinode->can_cache_brlcks) {
-   flock->fl_type = saved_type;
+   flock->c.flc_type = saved_type;
rc = 1;
}
 
@@ -1432,7 +1431,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock)
struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
int rc = FILE_LOCK_DEFERRED + 1;
 
-   if ((flock->fl_flags & FL_POSIX) == 

[PATCH v3 15/47] smb/server: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions.

Signed-off-by: Jeff Layton 
---
 fs/smb/server/smb2pdu.c | 6 +++---
 fs/smb/server/vfs.c | 6 +++---
 2 files changed, 6 insertions(+), 6 deletions(-)
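The helpers replace open-coded `fl_type` comparisons with named predicates, so call sites keep working when the field later moves into `struct file_lock_core`. A hedged userspace sketch of what such predicates look like (the stand-in struct is simplified; the kernel's helpers take the full `struct file_lock`):

```c
#include <stdbool.h>
#include <fcntl.h>   /* F_RDLCK, F_WRLCK, F_UNLCK */

/* Simplified stand-in; the real struct file_lock is far larger */
struct file_lock {
	unsigned char fl_type;
};

/*
 * Predicate helpers in the style of this series: callers ask what kind
 * of lock this is instead of comparing fl_type directly.
 */
static inline bool lock_is_read(const struct file_lock *fl)
{
	return fl->fl_type == F_RDLCK;
}

static inline bool lock_is_write(const struct file_lock *fl)
{
	return fl->fl_type == F_WRLCK;
}

static inline bool lock_is_unlock(const struct file_lock *fl)
{
	return fl->fl_type == F_UNLCK;
}
```

With these in place, a later field rename only touches the three helper bodies, not every filesystem.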

diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index ba7a72a6a4f4..e170b96d5ac0 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -6841,7 +6841,7 @@ static void smb2_remove_blocked_lock(void **argv)
struct file_lock *flock = (struct file_lock *)argv[0];
 
ksmbd_vfs_posix_lock_unblock(flock);
-   wake_up(>fl_wait);
+   locks_wake_up(flock);
 }
 
 static inline bool lock_defer_pending(struct file_lock *fl)
@@ -6991,7 +6991,7 @@ int smb2_lock(struct ksmbd_work *work)
file_inode(smb_lock->fl->fl_file))
continue;
 
-   if (smb_lock->fl->fl_type == F_UNLCK) {
+   if (lock_is_unlock(smb_lock->fl)) {
-   if (cmp_lock->fl->fl_file == smb_lock->fl->fl_file &&
cmp_lock->start == smb_lock->start &&
cmp_lock->end == smb_lock->end &&
@@ -7051,7 +7051,7 @@ int smb2_lock(struct ksmbd_work *work)
}
up_read(_list_lock);
 out_check_cl:
-   if (smb_lock->fl->fl_type == F_UNLCK && nolock) {
+   if (lock_is_unlock(smb_lock->fl) && nolock) {
pr_err("Try to unlock nolocked range\n");
rsp->hdr.Status = STATUS_RANGE_NOT_LOCKED;
goto out;
diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
index a6961bfe3e13..449cfa9ed31c 100644
--- a/fs/smb/server/vfs.c
+++ b/fs/smb/server/vfs.c
@@ -337,16 +337,16 @@ static int check_lock_range(struct file *filp, loff_t start, loff_t end,
return 0;
 
spin_lock(&ctx->flc_lock);
-   list_for_each_entry(flock, &ctx->flc_posix, fl_list) {
+   for_each_file_lock(flock, &ctx->flc_posix) {
/* check conflict locks */
if (flock->fl_end >= start && end >= flock->fl_start) {
-   if (flock->fl_type == F_RDLCK) {
+   if (lock_is_read(flock)) {
if (type == WRITE) {
pr_err("not allow write by shared lock\n");
error = 1;
goto out;
}
-   } else if (flock->fl_type == F_WRLCK) {
+   } else if (lock_is_write(flock)) {
/* check owner in lock */
if (flock->fl_file != filp) {
error = 1;

-- 
2.43.0




[PATCH v3 42/47] nfsd: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/nfsd/filecache.c|  4 +--
 fs/nfsd/netns.h|  1 -
 fs/nfsd/nfs4callback.c |  2 +-
 fs/nfsd/nfs4layouts.c  | 15 ++-
 fs/nfsd/nfs4state.c| 69 +-
 5 files changed, 46 insertions(+), 45 deletions(-)

diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9cb7f0c33df5..b86d8494052c 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -662,8 +662,8 @@ nfsd_file_lease_notifier_call(struct notifier_block *nb, unsigned long arg,
struct file_lock *fl = data;
 
/* Only close files for F_SETLEASE leases */
-   if (fl->fl_flags & FL_LEASE)
-   nfsd_file_close_inode(file_inode(fl->fl_file));
+   if (fl->c.flc_flags & FL_LEASE)
+   nfsd_file_close_inode(file_inode(fl->c.flc_file));
return 0;
 }
 
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index fd91125208be..74b4360779a1 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,7 +10,6 @@
 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 926c29879c6a..32d23ef3e5de 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -674,7 +674,7 @@ static void nfs4_xdr_enc_cb_notify_lock(struct rpc_rqst *req,
const struct nfsd4_callback *cb = data;
const struct nfsd4_blocked_lock *nbl =
container_of(cb, struct nfsd4_blocked_lock, nbl_cb);
-   struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.fl_owner;
+   struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.c.flc_owner;
struct nfs4_cb_compound_hdr hdr = {
.ident = 0,
.minorversion = cb->cb_clp->cl_minorversion,
diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
index 5e8096bc5eaa..daae68e526e0 100644
--- a/fs/nfsd/nfs4layouts.c
+++ b/fs/nfsd/nfs4layouts.c
@@ -193,14 +193,15 @@ nfsd4_layout_setlease(struct nfs4_layout_stateid *ls)
return -ENOMEM;
locks_init_lock(fl);
fl->fl_lmops = &nfsd4_layouts_lm_ops;
-   fl->fl_flags = FL_LAYOUT;
-   fl->fl_type = F_RDLCK;
+   fl->c.flc_flags = FL_LAYOUT;
+   fl->c.flc_type = F_RDLCK;
fl->fl_end = OFFSET_MAX;
-   fl->fl_owner = ls;
-   fl->fl_pid = current->tgid;
-   fl->fl_file = ls->ls_file->nf_file;
+   fl->c.flc_owner = ls;
+   fl->c.flc_pid = current->tgid;
+   fl->c.flc_file = ls->ls_file->nf_file;
 
-   status = vfs_setlease(fl->fl_file, fl->fl_type, &fl, NULL);
+   status = vfs_setlease(fl->c.flc_file, fl->c.flc_type, &fl,
+ NULL);
if (status) {
locks_free_lock(fl);
return status;
@@ -731,7 +732,7 @@ nfsd4_layout_lm_break(struct file_lock *fl)
 * in time:
 */
fl->fl_break_time = 0;
-   nfsd4_recall_file_layout(fl->fl_owner);
+   nfsd4_recall_file_layout(fl->c.flc_owner);
return false;
 }
 
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 83d605ecdcdc..4a1d462209cd 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4924,7 +4924,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 static bool
 nfsd_break_deleg_cb(struct file_lock *fl)
 {
-   struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
+   struct nfs4_delegation *dp = (struct nfs4_delegation *) fl->c.flc_owner;
struct nfs4_file *fp = dp->dl_stid.sc_file;
struct nfs4_client *clp = dp->dl_stid.sc_client;
struct nfsd_net *nn;
@@ -4962,7 +4962,7 @@ nfsd_break_deleg_cb(struct file_lock *fl)
  */
 static bool nfsd_breaker_owns_lease(struct file_lock *fl)
 {
-   struct nfs4_delegation *dl = fl->fl_owner;
+   struct nfs4_delegation *dl = fl->c.flc_owner;
struct svc_rqst *rqst;
struct nfs4_client *clp;
 
@@ -4980,7 +4980,7 @@ static int
 nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
 struct list_head *dispose)
 {
-   struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
+   struct nfs4_delegation *dp = (struct nfs4_delegation *) onlist->c.flc_owner;
struct nfs4_client *clp = dp->dl_stid.sc_client;
 
if (arg & F_UNLCK) {
@@ -5340,12 +5340,12 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
if (!fl)
return NULL;
fl->fl_lmops = &nfsd_lease_mng_ops;
-   fl->fl_flags = FL_DELEG;
-   fl->fl_type = flag == NFS4_OPEN_DELEGATE_READ? F_RDLCK: F_WRLCK;
+   fl->c.flc_flags = FL_DELEG;
+   fl->c

[PATCH v3 41/47] nfs: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/nfs/delegation.c |  2 +-
 fs/nfs/file.c   | 19 +--
 fs/nfs/nfs3proc.c   |  2 +-
 fs/nfs/nfs4_fs.h|  1 -
 fs/nfs/nfs4proc.c   | 33 ++---
 fs/nfs/nfs4state.c  |  4 ++--
 fs/nfs/nfs4trace.h  |  4 ++--
 fs/nfs/nfs4xdr.c|  6 +++---
 fs/nfs/write.c  |  5 ++---
 9 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
index ca6985001466..d4a42ce0c7e3 100644
--- a/fs/nfs/delegation.c
+++ b/fs/nfs/delegation.c
@@ -157,7 +157,7 @@ static int nfs_delegation_claim_locks(struct nfs4_state *state, const nfs4_state
spin_lock(&flctx->flc_lock);
 restart:
for_each_file_lock(fl, list) {
-   if (nfs_file_open_context(fl->fl_file)->state != state)
+   if (nfs_file_open_context(fl->c.flc_file)->state != state)
continue;
spin_unlock(&flctx->flc_lock);
status = nfs4_lock_delegation_recall(fl, state, stateid);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 0b6691e64d27..407c6e15afe2 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,7 +31,6 @@
 #include 
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "delegation.h"
@@ -721,15 +720,15 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 {
struct inode *inode = filp->f_mapping->host;
int status = 0;
-   unsigned int saved_type = fl->fl_type;
+   unsigned int saved_type = fl->c.flc_type;
 
/* Try local locking first */
posix_test_lock(filp, fl);
-   if (fl->fl_type != F_UNLCK) {
+   if (fl->c.flc_type != F_UNLCK) {
/* found a conflict */
goto out;
}
-   fl->fl_type = saved_type;
+   fl->c.flc_type = saved_type;
 
if (NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
goto out_noconflict;
@@ -741,7 +740,7 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 out:
return status;
 out_noconflict:
-   fl->fl_type = F_UNLCK;
+   fl->c.flc_type = F_UNLCK;
goto out;
 }
 
@@ -766,7 +765,7 @@ do_unlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 *  If we're signalled while cleaning up locks on process exit, we
 *  still need to complete the unlock.
 */
-   if (status < 0 && !(fl->fl_flags & FL_CLOSE))
+   if (status < 0 && !(fl->c.flc_flags & FL_CLOSE))
return status;
}
 
@@ -833,12 +832,12 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
 
dprintk("NFS: lock(%pD2, t=%x, fl=%x, r=%lld:%lld)\n",
-   filp, fl->fl_type, fl->fl_flags,
+   filp, fl->c.flc_type, fl->c.flc_flags,
(long long)fl->fl_start, (long long)fl->fl_end);
 
nfs_inc_stats(inode, NFSIOS_VFSLOCK);
 
-   if (fl->fl_flags & FL_RECLAIM)
+   if (fl->c.flc_flags & FL_RECLAIM)
return -ENOGRACE;
 
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FCNTL)
@@ -870,9 +869,9 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
 
dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
-   filp, fl->fl_type, fl->fl_flags);
+   filp, fl->c.flc_type, fl->c.flc_flags);
 
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->c.flc_flags & FL_FLOCK))
return -ENOLCK;
 
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 2de66e4e8280..cbbe3f0193b8 100644
--- a/fs/nfs/nfs3proc.c
+++ b/fs/nfs/nfs3proc.c
@@ -963,7 +963,7 @@ nfs3_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
struct nfs_open_context *ctx = nfs_file_open_context(filp);
int status;
 
-   if (fl->fl_flags & FL_CLOSE) {
+   if (fl->c.flc_flags & FL_CLOSE) {
l_ctx = nfs_get_lock_context(ctx);
if (IS_ERR(l_ctx))
l_ctx = NULL;
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 752224a48f1c..581698f1b7b2 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -23,7 +23,6 @@
 #define NFS4_MAX_LOOP_ON_RECOVER (10)
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 struct idmap;
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index df54fcd0fa08..91dddcd79004 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -6800,7 +6800,7 @@ st

[PATCH v3 40/47] lockd: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/lockd/clnt4xdr.c | 14 +-
 fs/lockd/clntlock.c |  2 +-
 fs/lockd/clntproc.c | 62 +++
 fs/lockd/clntxdr.c  | 14 +-
 fs/lockd/svc4proc.c | 10 +++
 fs/lockd/svclock.c  | 64 +++--
 fs/lockd/svcproc.c  | 10 +++
 fs/lockd/svcsubs.c  | 20 +++---
 fs/lockd/xdr.c  | 14 +-
 fs/lockd/xdr4.c | 14 +-
 include/linux/lockd/lockd.h |  8 +++---
 include/linux/lockd/xdr.h   |  1 -
 12 files changed, 119 insertions(+), 114 deletions(-)

diff --git a/fs/lockd/clnt4xdr.c b/fs/lockd/clnt4xdr.c
index 8161667c976f..527458db4525 100644
--- a/fs/lockd/clnt4xdr.c
+++ b/fs/lockd/clnt4xdr.c
@@ -243,7 +243,7 @@ static void encode_nlm4_holder(struct xdr_stream *xdr,
u64 l_offset, l_len;
__be32 *p;
 
-   encode_bool(xdr, lock->fl.fl_type == F_RDLCK);
+   encode_bool(xdr, lock->fl.c.flc_type == F_RDLCK);
encode_int32(xdr, lock->svid);
encode_netobj(xdr, lock->oh.data, lock->oh.len);
 
@@ -270,7 +270,7 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
goto out_overflow;
exclusive = be32_to_cpup(p++);
lock->svid = be32_to_cpup(p);
-   fl->fl_pid = (pid_t)lock->svid;
+   fl->c.flc_pid = (pid_t)lock->svid;
 
error = decode_netobj(xdr, &lock->oh);
if (unlikely(error))
@@ -280,8 +280,8 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
if (unlikely(p == NULL))
goto out_overflow;
 
-   fl->fl_flags = FL_POSIX;
-   fl->fl_type  = exclusive != 0 ? F_WRLCK : F_RDLCK;
+   fl->c.flc_flags = FL_POSIX;
+   fl->c.flc_type  = exclusive != 0 ? F_WRLCK : F_RDLCK;
p = xdr_decode_hyper(p, &l_offset);
xdr_decode_hyper(p, &l_len);
nlm4svc_set_file_lock_range(fl, l_offset, l_len);
@@ -357,7 +357,7 @@ static void nlm4_xdr_enc_testargs(struct rpc_rqst *req,
const struct nlm_lock *lock = &args->lock;
 
encode_cookie(xdr, &args->cookie);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.c.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
 }
 
@@ -380,7 +380,7 @@ static void nlm4_xdr_enc_lockargs(struct rpc_rqst *req,
 
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.c.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
encode_bool(xdr, args->reclaim);
encode_int32(xdr, args->state);
@@ -403,7 +403,7 @@ static void nlm4_xdr_enc_cancargs(struct rpc_rqst *req,
 
encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.c.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
 }
 
diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
index 5d85715be763..a7e0519ec024 100644
--- a/fs/lockd/clntlock.c
+++ b/fs/lockd/clntlock.c
@@ -185,7 +185,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
continue;
if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
continue;
-   if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)), fh) != 0)
+   if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->c.flc_file)), fh) != 0)
continue;
/* Alright, we found a lock. Set the return status
 * and wake up the caller
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index 1f71260603b7..cebcc283b7ce 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -134,7 +133,8 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
char *nodename = req->a_host->h_rpcclnt->cl_nodename;
 
nlmclnt_next_cookie(&argp->cookie);
-   memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
+   memcpy(&lock->fh, NFS_FH(file_inode(fl->c.flc_file)),
+  sizeof(struct nfs_fh));
lock->caller  = nodename;
lock->oh.data = req->a_owner;
lock->oh.len  = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
@@ -143,7 +143,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
lock->svid = fl->fl_u.nfs_fl.owner->pid;
lock->fl.fl_sta

[PATCH v3 39/47] fuse: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/fuse/file.c | 15 +++
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 2757870ee6ac..c007b0f0c3a7 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -18,7 +18,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
@@ -2510,14 +2509,14 @@ static int convert_fuse_file_lock(struct fuse_conn *fc,
 * translate it into the caller's pid namespace.
 */
rcu_read_lock();
-   fl->fl_pid = pid_nr_ns(find_pid_ns(ffl->pid, fc->pid_ns), &init_pid_ns);
+   fl->c.flc_pid = pid_nr_ns(find_pid_ns(ffl->pid, fc->pid_ns), &init_pid_ns);
rcu_read_unlock();
break;
 
default:
return -EIO;
}
-   fl->fl_type = ffl->type;
+   fl->c.flc_type = ffl->type;
return 0;
 }
 
@@ -2531,10 +2530,10 @@ static void fuse_lk_fill(struct fuse_args *args, struct file *file,
 
memset(inarg, 0, sizeof(*inarg));
inarg->fh = ff->fh;
-   inarg->owner = fuse_lock_owner_id(fc, fl->fl_owner);
+   inarg->owner = fuse_lock_owner_id(fc, fl->c.flc_owner);
inarg->lk.start = fl->fl_start;
inarg->lk.end = fl->fl_end;
-   inarg->lk.type = fl->fl_type;
+   inarg->lk.type = fl->c.flc_type;
inarg->lk.pid = pid;
if (flock)
inarg->lk_flags |= FUSE_LK_FLOCK;
@@ -2571,8 +2570,8 @@ static int fuse_setlk(struct file *file, struct file_lock *fl, int flock)
struct fuse_mount *fm = get_fuse_mount(inode);
FUSE_ARGS(args);
struct fuse_lk_in inarg;
-   int opcode = (fl->fl_flags & FL_SLEEP) ? FUSE_SETLKW : FUSE_SETLK;
-   struct pid *pid = fl->fl_type != F_UNLCK ? task_tgid(current) : NULL;
+   int opcode = (fl->c.flc_flags & FL_SLEEP) ? FUSE_SETLKW : FUSE_SETLK;
+   struct pid *pid = fl->c.flc_type != F_UNLCK ? task_tgid(current) : NULL;
pid_t pid_nr = pid_nr_ns(pid, fm->fc->pid_ns);
int err;
 
@@ -2582,7 +2581,7 @@ static int fuse_setlk(struct file *file, struct file_lock *fl, int flock)
}
 
/* Unlock on close is handled by the flush method */
-   if ((fl->fl_flags & FL_CLOSE_POSIX) == FL_CLOSE_POSIX)
+   if ((fl->c.flc_flags & FL_CLOSE_POSIX) == FL_CLOSE_POSIX)
return 0;
 
fuse_lk_fill(&args, file, fl, opcode, pid_nr, flock, &inarg);

-- 
2.43.0




[PATCH v3 37/47] dlm: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/dlm/plock.c | 45 ++---
 1 file changed, 22 insertions(+), 23 deletions(-)
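One detail visible in the hunks below: dlm ships the lock owner to its user-space helper as a `__u64`, so the owner pointer is packed with a double cast (`(__u64)(long)`). A standalone sketch of that round trip, assuming pointers fit in 64 bits (stand-in structs, not the kernel's):

```c
#include <stdint.h>

/* Stand-in structs: only the owner field matters for this sketch */
struct file_lock_core {
	void *flc_owner;
};

struct file_lock {
	struct file_lock_core c;
};

/*
 * Pack the owner pointer into a wire-format u64, mirroring the
 * (__u64)(long) cast used when filling op->info.owner.  Casting
 * through (long) first avoids a pointer-to-wider-integer warning
 * on 32-bit builds.
 */
static uint64_t pack_owner(const struct file_lock *fl)
{
	return (uint64_t)(long)fl->c.flc_owner;
}
```

The value is only ever compared for equality on the far side, so the lossy 32-bit case still works as an opaque token.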

diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index fdcddbb96d40..9ca83ef70ed1 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -4,7 +4,6 @@
  */
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -139,14 +138,14 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_LOCK;
-   op->info.pid= fl->fl_pid;
-   op->info.ex = (lock_is_write(fl));
-   op->info.wait   = !!(fl->fl_flags & FL_SLEEP);
+   op->info.pid= fl->c.flc_pid;
+   op->info.ex = lock_is_write(fl);
+   op->info.wait   = !!(fl->c.flc_flags & FL_SLEEP);
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;
op->info.end= fl->fl_end;
-   op->info.owner = (__u64)(long)fl->fl_owner;
+   op->info.owner = (__u64)(long) fl->c.flc_owner;
/* async handling */
if (fl->fl_lmops && fl->fl_lmops->lm_grant) {
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
@@ -259,7 +258,7 @@ static int dlm_plock_callback(struct plock_op *op)
}
 
/* got fs lock; bookkeep locally as well: */
-   flc->fl_flags &= ~FL_SLEEP;
+   flc->c.flc_flags &= ~FL_SLEEP;
if (posix_lock_file(file, flc, NULL)) {
/*
 * This can only happen in the case of kmalloc() failure.
@@ -292,7 +291,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
struct dlm_ls *ls;
struct plock_op *op;
int rv;
-   unsigned char saved_flags = fl->fl_flags;
+   unsigned char saved_flags = fl->c.flc_flags;
 
ls = dlm_find_lockspace_local(lockspace);
if (!ls)
@@ -305,7 +304,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
/* cause the vfs unlock to return ENOENT if lock is not found */
-   fl->fl_flags |= FL_EXISTS;
+   fl->c.flc_flags |= FL_EXISTS;
 
rv = locks_lock_file_wait(file, fl);
if (rv == -ENOENT) {
@@ -318,14 +317,14 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_UNLOCK;
-   op->info.pid= fl->fl_pid;
+   op->info.pid= fl->c.flc_pid;
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;
op->info.end= fl->fl_end;
-   op->info.owner = (__u64)(long)fl->fl_owner;
+   op->info.owner = (__u64)(long) fl->c.flc_owner;
 
-   if (fl->fl_flags & FL_CLOSE) {
+   if (fl->c.flc_flags & FL_CLOSE) {
op->info.flags |= DLM_PLOCK_FL_CLOSE;
send_op(op);
rv = 0;
@@ -346,7 +345,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
dlm_release_plock_op(op);
 out:
dlm_put_lockspace(ls);
-   fl->fl_flags = saved_flags;
+   fl->c.flc_flags = saved_flags;
return rv;
 }
 EXPORT_SYMBOL_GPL(dlm_posix_unlock);
@@ -376,14 +375,14 @@ int dlm_posix_cancel(dlm_lockspace_t *lockspace, u64 number, struct file *file,
return -EINVAL;
 
memset(, 0, sizeof(info));
-   info.pid = fl->fl_pid;
-   info.ex = (lock_is_write(fl));
+   info.pid = fl->c.flc_pid;
+   info.ex = lock_is_write(fl);
info.fsid = ls->ls_global_id;
dlm_put_lockspace(ls);
info.number = number;
info.start = fl->fl_start;
info.end = fl->fl_end;
-   info.owner = (__u64)(long)fl->fl_owner;
+   info.owner = (__u64)(long) fl->c.flc_owner;
 
rv = do_lock_cancel(&info);
switch (rv) {
@@ -438,13 +437,13 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_GET;
-   op->info.pid= fl->fl_pid;
-   op->info.ex = (lock_is_write(fl));
+   op->info.pid= fl->c.flc_pid;
+   op->info.ex = lock_is_write(fl);
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;
op->info.end= fl->

[PATCH v3 36/47] ceph: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/ceph/locks.c | 51 ++-
 1 file changed, 26 insertions(+), 25 deletions(-)

diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index ce773e9c0b79..ebf4ac0055dd 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,7 +7,6 @@
 
 #include "super.h"
 #include "mds_client.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
@@ -34,7 +33,7 @@ void __init ceph_flock_init(void)
 
 static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
-   struct inode *inode = file_inode(dst->fl_file);
+   struct inode *inode = file_inode(dst->c.flc_file);
atomic_inc(&ceph_inode(inode)->i_filelock_ref);
dst->fl_u.ceph.inode = igrab(inode);
 }
@@ -111,17 +110,18 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
else
length = fl->fl_end - fl->fl_start + 1;
 
-   owner = secure_addr(fl->fl_owner);
+   owner = secure_addr(fl->c.flc_owner);
 
doutc(cl, "rule: %d, op: %d, owner: %llx, pid: %llu, "
"start: %llu, length: %llu, wait: %d, type: %d\n",
-   (int)lock_type, (int)operation, owner, (u64)fl->fl_pid,
-   fl->fl_start, length, wait, fl->fl_type);
+   (int)lock_type, (int)operation, owner,
+   (u64) fl->c.flc_pid,
+   fl->fl_start, length, wait, fl->c.flc_type);
 
req->r_args.filelock_change.rule = lock_type;
req->r_args.filelock_change.type = cmd;
req->r_args.filelock_change.owner = cpu_to_le64(owner);
-   req->r_args.filelock_change.pid = cpu_to_le64((u64)fl->fl_pid);
+   req->r_args.filelock_change.pid = cpu_to_le64((u64) fl->c.flc_pid);
req->r_args.filelock_change.start = cpu_to_le64(fl->fl_start);
req->r_args.filelock_change.length = cpu_to_le64(length);
req->r_args.filelock_change.wait = wait;
@@ -131,13 +131,13 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
err = ceph_mdsc_wait_request(mdsc, req, wait ?
ceph_lock_wait_for_completion : NULL);
if (!err && operation == CEPH_MDS_OP_GETFILELOCK) {
-   fl->fl_pid = -le64_to_cpu(req->r_reply_info.filelock_reply->pid);
+   fl->c.flc_pid = -le64_to_cpu(req->r_reply_info.filelock_reply->pid);
if (CEPH_LOCK_SHARED == req->r_reply_info.filelock_reply->type)
-   fl->fl_type = F_RDLCK;
+   fl->c.flc_type = F_RDLCK;
else if (CEPH_LOCK_EXCL == req->r_reply_info.filelock_reply->type)
-   fl->fl_type = F_WRLCK;
+   fl->c.flc_type = F_WRLCK;
else
-   fl->fl_type = F_UNLCK;
+   fl->c.flc_type = F_UNLCK;
 
fl->fl_start = le64_to_cpu(req->r_reply_info.filelock_reply->start);
length = le64_to_cpu(req->r_reply_info.filelock_reply->start) +
@@ -151,8 +151,8 @@ static int ceph_lock_message(u8 lock_type, u16 operation, struct inode *inode,
ceph_mdsc_put_request(req);
doutc(cl, "rule: %d, op: %d, pid: %llu, start: %llu, "
  "length: %llu, wait: %d, type: %d, err code %d\n",
- (int)lock_type, (int)operation, (u64)fl->fl_pid,
- fl->fl_start, length, wait, fl->fl_type, err);
+ (int)lock_type, (int)operation, (u64) fl->c.flc_pid,
+ fl->fl_start, length, wait, fl->c.flc_type, err);
return err;
 }
 
@@ -228,10 +228,10 @@ static int ceph_lock_wait_for_completion(struct ceph_mds_client *mdsc,
 static int try_unlock_file(struct file *file, struct file_lock *fl)
 {
int err;
-   unsigned int orig_flags = fl->fl_flags;
-   fl->fl_flags |= FL_EXISTS;
+   unsigned int orig_flags = fl->c.flc_flags;
+   fl->c.flc_flags |= FL_EXISTS;
err = locks_lock_file_wait(file, fl);
-   fl->fl_flags = orig_flags;
+   fl->c.flc_flags = orig_flags;
if (err == -ENOENT) {
if (!(orig_flags & FL_EXISTS))
err = 0;
@@ -254,13 +254,13 @@ int ceph_lock(struct file *file, int cmd, struct file_lock *fl)
u8 wait = 0;
u8 lock_cmd;
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->c.flc_flags & FL_POSIX))
return -ENOLCK;
 
if (ceph_inode_is_shutdown(inode))
return -ESTALE;
 
-   doutc(cl, "fl_owner: %p\n", fl-&g

[PATCH v3 35/47] afs: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/afs/flock.c | 38 +++---
 fs/afs/internal.h  |  1 -
 include/trace/events/afs.h |  4 ++--
 3 files changed, 21 insertions(+), 22 deletions(-)

diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index 4eee3d1ca5ad..f0e96a35093f 100644
--- a/fs/afs/flock.c
+++ b/fs/afs/flock.c
@@ -121,16 +121,15 @@ static void afs_next_locker(struct afs_vnode *vnode, int error)
 
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
if (error &&
-   p->fl_type == type &&
-   afs_file_key(p->fl_file) == key) {
+   p->c.flc_type == type &&
+   afs_file_key(p->c.flc_file) == key) {
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = error;
locks_wake_up(p);
}
 
/* Select the next locker to hand off to. */
-   if (next &&
-   (lock_is_write(next) || lock_is_read(p)))
+   if (next && (lock_is_write(next) || lock_is_read(p)))
continue;
next = p;
}
@@ -464,7 +463,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
 
_enter("{%llx:%llu},%llu-%llu,%u,%u",
   vnode->fid.vid, vnode->fid.vnode,
-  fl->fl_start, fl->fl_end, fl->fl_type, mode);
+  fl->fl_start, fl->fl_end, fl->c.flc_type, mode);
 
fl->fl_ops = &afs_lock_ops;
INIT_LIST_HEAD(&fl->fl_u.afs.link);
@@ -524,7 +523,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
}
 
if (vnode->lock_state == AFS_VNODE_LOCK_NONE &&
-   !(fl->fl_flags & FL_SLEEP)) {
+   !(fl->c.flc_flags & FL_SLEEP)) {
ret = -EAGAIN;
if (type == AFS_LOCK_READ) {
if (vnode->status.lock_count == -1)
@@ -621,7 +620,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
return 0;
 
 lock_is_contended:
-   if (!(fl->fl_flags & FL_SLEEP)) {
+   if (!(fl->c.flc_flags & FL_SLEEP)) {
list_del_init(&fl->fl_u.afs.link);
afs_next_locker(vnode, 0);
ret = -EAGAIN;
@@ -641,7 +640,7 @@ static int afs_do_setlk(struct file *file, struct file_lock *fl)
spin_unlock(&vnode->lock);
 
trace_afs_flock_ev(vnode, fl, afs_flock_waiting, 0);
-   ret = wait_event_interruptible(fl->fl_wait,
+   ret = wait_event_interruptible(fl->c.flc_wait,
   fl->fl_u.afs.state != AFS_LOCK_PENDING);
trace_afs_flock_ev(vnode, fl, afs_flock_waited, ret);
 
@@ -704,7 +703,8 @@ static int afs_do_unlk(struct file *file, struct file_lock *fl)
struct afs_vnode *vnode = AFS_FS_I(file_inode(file));
int ret;
 
-   _enter("{%llx:%llu},%u", vnode->fid.vid, vnode->fid.vnode, fl->fl_type);
+   _enter("{%llx:%llu},%u", vnode->fid.vid, vnode->fid.vnode,
+  fl->c.flc_type);
 
trace_afs_flock_op(vnode, fl, afs_flock_op_unlock);
 
@@ -730,7 +730,7 @@ static int afs_do_getlk(struct file *file, struct file_lock *fl)
if (vnode->lock_state == AFS_VNODE_LOCK_DELETED)
return -ENOENT;
 
-   fl->fl_type = F_UNLCK;
+   fl->c.flc_type = F_UNLCK;
 
/* check local lock records first */
posix_test_lock(file, fl);
@@ -743,18 +743,18 @@ static int afs_do_getlk(struct file *file, struct file_lock *fl)
lock_count = READ_ONCE(vnode->status.lock_count);
if (lock_count != 0) {
if (lock_count > 0)
-   fl->fl_type = F_RDLCK;
+   fl->c.flc_type = F_RDLCK;
else
-   fl->fl_type = F_WRLCK;
+   fl->c.flc_type = F_WRLCK;
fl->fl_start = 0;
fl->fl_end = OFFSET_MAX;
-   fl->fl_pid = 0;
+   fl->c.flc_pid = 0;
}
}
 
ret = 0;
 error:
-   _leave(" = %d [%hd]", ret, fl->fl_type);
+   _leave(" = %d [%hd]", ret, fl->c.flc_type);
return ret;
 }
 
@@ -769,7 +769,7 @@ int afs_lock(struct file *file, int cmd, struct file_lock *fl)
 
_enter("{%llx:%llu},%d,{t=%x,fl=%x,r=%Ld:%Ld}",
   vnode->fid.vid, vnode->fid.vnode, cmd,
-  fl->fl_type, fl->fl_flags,
+

[PATCH v3 34/47] 9p: adapt to breakup of struct file_lock

2024-01-31 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c | 39 +++
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index a1dabcf73380..abdbbaee5184 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,7 +9,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -108,7 +107,7 @@ static int v9fs_file_lock(struct file *filp, int cmd, 
struct file_lock *fl)
 
p9_debug(P9_DEBUG_VFS, "filp: %p lock: %p\n", filp, fl);
 
-   if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
+   if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->c.flc_type != F_UNLCK) {
filemap_write_and_wait(inode->i_mapping);
invalidate_mapping_pages(&inode->i_data, 0, -1);
}
@@ -127,7 +126,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
fid = filp->private_data;
BUG_ON(fid == NULL);
 
-   BUG_ON((fl->fl_flags & FL_POSIX) != FL_POSIX);
+   BUG_ON((fl->c.flc_flags & FL_POSIX) != FL_POSIX);
 
res = locks_lock_file_wait(filp, fl);
if (res < 0)
@@ -136,7 +135,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
/* convert posix lock to p9 tlock args */
memset(&flock, 0, sizeof(flock));
/* map the lock type */
-   switch (fl->fl_type) {
+   switch (fl->c.flc_type) {
case F_RDLCK:
flock.type = P9_LOCK_TYPE_RDLCK;
break;
@@ -152,7 +151,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
flock.length = 0;
else
flock.length = fl->fl_end - fl->fl_start + 1;
-   flock.proc_id = fl->fl_pid;
+   flock.proc_id = fl->c.flc_pid;
flock.client_id = fid->clnt->name;
if (IS_SETLKW(cmd))
flock.flags = P9_LOCK_FLAGS_BLOCK;
@@ -207,13 +206,13 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
 * incase server returned error for lock request, revert
 * it locally
 */
-   if (res < 0 && fl->fl_type != F_UNLCK) {
-   unsigned char type = fl->fl_type;
+   if (res < 0 && fl->c.flc_type != F_UNLCK) {
+   unsigned char type = fl->c.flc_type;
 
-   fl->fl_type = F_UNLCK;
+   fl->c.flc_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
-   fl->fl_type = type;
+   fl->c.flc_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);
@@ -235,7 +234,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
 * if we have a conflicting lock locally, no need to validate
 * with server
 */
-   if (fl->fl_type != F_UNLCK)
+   if (fl->c.flc_type != F_UNLCK)
return res;
 
/* convert posix lock to p9 tgetlock args */
@@ -246,7 +245,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
glock.length = 0;
else
glock.length = fl->fl_end - fl->fl_start + 1;
-   glock.proc_id = fl->fl_pid;
+   glock.proc_id = fl->c.flc_pid;
glock.client_id = fid->clnt->name;
 
res = p9_client_getlock_dotl(fid, &glock);
@@ -255,13 +254,13 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
/* map 9p lock type to os lock type */
switch (glock.type) {
case P9_LOCK_TYPE_RDLCK:
-   fl->fl_type = F_RDLCK;
+   fl->c.flc_type = F_RDLCK;
break;
case P9_LOCK_TYPE_WRLCK:
-   fl->fl_type = F_WRLCK;
+   fl->c.flc_type = F_WRLCK;
break;
case P9_LOCK_TYPE_UNLCK:
-   fl->fl_type = F_UNLCK;
+   fl->c.flc_type = F_UNLCK;
break;
}
if (glock.type != P9_LOCK_TYPE_UNLCK) {
@@ -270,7 +269,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
fl->fl_end = OFFSET_MAX;
else
fl->fl_end = glock.start + glock.length - 1;
-   fl->fl_pid = -glock.proc_id;
+   fl->c.flc_pid = -glock.proc_id;
}
 out:
if (glock.client_id != fid->clnt->name)
@@ -294,7 +293,7 @@ static int v9fs_file_lock_dotl(struct file *filp, int cmd, 
struct file_lock *fl)
p9_debug(P9_DEBUG_VFS, "filp: %

[PATCH v3 33/47] filelock: convert seqfile handling to use file_lock_core

2024-01-31 Thread Jeff Layton
Reduce some pointer manipulation by just using file_lock_core where we
can and only translate to a file_lock when needed.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 72 +++---
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 97f6e9163130..1a4b01203d3d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2718,52 +2718,53 @@ struct locks_iterator {
loff_t  li_pos;
 };
 
-static void lock_get_status(struct seq_file *f, struct file_lock *fl,
+static void lock_get_status(struct seq_file *f, struct file_lock_core *flc,
loff_t id, char *pfx, int repeat)
 {
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
-   int type = fl->c.flc_type;
+   int type = flc->flc_type;
+   struct file_lock *fl = file_lock(flc);
+
+   pid = locks_translate_pid(flc, proc_pidns);
 
-   pid = locks_translate_pid(&fl->c, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
 * init_pid_ns to get saved lock pid value.
 */
-
-   if (fl->c.flc_file != NULL)
-   inode = file_inode(fl->c.flc_file);
+   if (flc->flc_file != NULL)
+   inode = file_inode(flc->flc_file);
 
seq_printf(f, "%lld: ", id);
 
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
 
-   if (fl->c.flc_flags & FL_POSIX) {
-   if (fl->c.flc_flags & FL_ACCESS)
+   if (flc->flc_flags & FL_POSIX) {
+   if (flc->flc_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
-   else if (fl->c.flc_flags & FL_OFDLCK)
+   else if (flc->flc_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
 
seq_printf(f, " %s ",
 (inode == NULL) ? "*NOINODE*" : "ADVISORY ");
-   } else if (fl->c.flc_flags & FL_FLOCK) {
+   } else if (flc->flc_flags & FL_FLOCK) {
seq_puts(f, "FLOCK  ADVISORY  ");
-   } else if (fl->c.flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+   } else if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
type = target_leasetype(fl);
 
-   if (fl->c.flc_flags & FL_DELEG)
+   if (flc->flc_flags & FL_DELEG)
seq_puts(f, "DELEG  ");
else
seq_puts(f, "LEASE  ");
 
if (lease_breaking(fl))
seq_puts(f, "BREAKING  ");
-   else if (fl->c.flc_file)
+   else if (flc->flc_file)
seq_puts(f, "ACTIVE");
else
seq_puts(f, "BREAKER   ");
@@ -2781,7 +2782,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
} else {
seq_printf(f, "%d :0 ", pid);
}
-   if (fl->c.flc_flags & FL_POSIX) {
+   if (flc->flc_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else
@@ -2791,18 +2792,18 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
}
 }
 
-static struct file_lock *get_next_blocked_member(struct file_lock *node)
+static struct file_lock_core *get_next_blocked_member(struct file_lock_core *node)
 {
-   struct file_lock *tmp;
+   struct file_lock_core *tmp;
 
/* NULL node or root node */
-   if (node == NULL || node->c.flc_blocker == NULL)
+   if (node == NULL || node->flc_blocker == NULL)
return NULL;
 
/* Next member in the linked list could be itself */
-   tmp = list_next_entry(node, c.flc_blocked_member);
-   if (list_entry_is_head(tmp, &node->c.flc_blocker->flc_blocked_requests,
-  c.flc_blocked_member)
+   tmp = list_next_entry(node, flc_blocked_member);
+   if (list_entry_is_head(tmp, &node->flc_blocker->flc_blocked_requests,
+  flc_blocked_member)
|| tmp == node) {
return NULL;
}
@@ -2813,18 +2814,18 @@ static struct file_lock *get_next_blocked_member(struct 
file_lock *node)
 static int locks_show(struct seq_file *f, void *v)
 {
struct locks_iterator *iter = f->private;
-   struct file_lock *cur, *tmp;
+   st

[PATCH v3 32/47] filelock: convert locks_translate_pid to take file_lock_core

2024-01-31 Thread Jeff Layton
locks_translate_pid is used on both locks and leases, so have that take
struct file_lock_core.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 50d02a53ca75..97f6e9163130 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2169,17 +2169,17 @@ EXPORT_SYMBOL_GPL(vfs_test_lock);
  *
  * Used to translate a fl_pid into a namespace virtual pid number
  */
-static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
+static pid_t locks_translate_pid(struct file_lock_core *fl, struct pid_namespace *ns)
 {
pid_t vnr;
struct pid *pid;
 
-   if (fl->c.flc_flags & FL_OFDLCK)
+   if (fl->flc_flags & FL_OFDLCK)
return -1;
 
/* Remote locks report a negative pid value */
-   if (fl->c.flc_pid <= 0)
-   return fl->c.flc_pid;
+   if (fl->flc_pid <= 0)
+   return fl->flc_pid;
 
/*
 * If the flock owner process is dead and its pid has been already
@@ -2187,10 +2187,10 @@ static pid_t locks_translate_pid(struct file_lock *fl, 
struct pid_namespace *ns)
 * flock owner pid number in init pidns.
 */
if (ns == &init_pid_ns)
-   return (pid_t) fl->c.flc_pid;
+   return (pid_t) fl->flc_pid;
 
rcu_read_lock();
-   pid = find_pid_ns(fl->c.flc_pid, &init_pid_ns);
+   pid = find_pid_ns(fl->flc_pid, &init_pid_ns);
vnr = pid_nr_ns(pid, ns);
rcu_read_unlock();
return vnr;
@@ -2198,7 +2198,7 @@ static pid_t locks_translate_pid(struct file_lock *fl, 
struct pid_namespace *ns)
 
 static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
 {
-   flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
+   flock->l_pid = locks_translate_pid(&fl->c, task_active_pid_ns(current));
 #if BITS_PER_LONG == 32
/*
 * Make sure we can represent the posix lock via
@@ -2220,7 +2220,7 @@ static int posix_lock_to_flock(struct flock *flock, 
struct file_lock *fl)
 #if BITS_PER_LONG == 32
 static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
 {
-   flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
+   flock->l_pid = locks_translate_pid(&fl->c, task_active_pid_ns(current));
flock->l_start = fl->fl_start;
flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
fl->fl_end - fl->fl_start + 1;
@@ -2726,7 +2726,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
int type = fl->c.flc_type;
 
-   pid = locks_translate_pid(fl, proc_pidns);
+   pid = locks_translate_pid(&fl->c, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
@@ -2819,7 +2819,7 @@ static int locks_show(struct seq_file *f, void *v)
 
cur = hlist_entry(v, struct file_lock, c.flc_link);
 
-   if (locks_translate_pid(cur, proc_pidns) == 0)
+   if (locks_translate_pid(&cur->c, proc_pidns) == 0)
return 0;
 
/* View this crossed linked list as a binary tree, the first member of 
fl_blocked_requests

-- 
2.43.0




[PATCH v3 31/47] filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx

2024-01-31 Thread Jeff Layton
Have these functions take a file_lock_core pointer instead of a
file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 44 ++--
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 9f3670ba0880..50d02a53ca75 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -824,28 +824,28 @@ static void locks_wake_up_blocks(struct file_lock_core 
*blocker)
 }
 
 static void
-locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
+locks_insert_lock_ctx(struct file_lock_core *fl, struct list_head *before)
 {
-   list_add_tail(&fl->c.flc_list, before);
-   locks_insert_global_locks(&fl->c);
+   list_add_tail(&fl->flc_list, before);
+   locks_insert_global_locks(fl);
 }
 
 static void
-locks_unlink_lock_ctx(struct file_lock *fl)
+locks_unlink_lock_ctx(struct file_lock_core *fl)
 {
-   locks_delete_global_locks(&fl->c);
-   list_del_init(&fl->c.flc_list);
-   locks_wake_up_blocks(&fl->c);
+   locks_delete_global_locks(fl);
+   list_del_init(&fl->flc_list);
+   locks_wake_up_blocks(fl);
 }
 
 static void
-locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
+locks_delete_lock_ctx(struct file_lock_core *fl, struct list_head *dispose)
 {
locks_unlink_lock_ctx(fl);
if (dispose)
-   list_add(&fl->c.flc_list, dispose);
+   list_add(&fl->flc_list, dispose);
else
-   locks_free_lock(fl);
+   locks_free_lock(file_lock(fl));
 }
 
 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
@@ -1072,7 +1072,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
if (request->c.flc_type == fl->c.flc_type)
goto out;
found = true;
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->c, &dispose);
break;
}
 
@@ -1097,7 +1097,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
goto out;
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
-   locks_insert_lock_ctx(new_fl, &ctx->flc_flock);
+   locks_insert_lock_ctx(&new_fl->c, &ctx->flc_flock);
new_fl = NULL;
error = 0;
 
@@ -1236,7 +1236,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
else
request->fl_end = fl->fl_end;
if (added) {
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->c, &dispose);
continue;
}
request = fl;
@@ -1265,7 +1265,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
 * one (This may happen several times).
 */
if (added) {
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->c, &dispose);
continue;
}
/*
@@ -1282,9 +1282,9 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
locks_move_blocks(new_fl, request);
request = new_fl;
new_fl = NULL;
-   locks_insert_lock_ctx(request,
+   locks_insert_lock_ctx(&request->c,
  &fl->c.flc_list);
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->c, &dispose);
added = true;
}
}
@@ -1313,7 +1313,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
}
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
-   locks_insert_lock_ctx(new_fl, &fl->c.flc_list);
+   locks_insert_lock_ctx(&new_fl->c, &fl->c.flc_list);
fl = new_fl;
new_fl = NULL;
}
@@ -1325,7 +1325,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
left = new_fl2;
new_fl2 = NULL;
locks_copy_lock(left, right);
-   locks_insert_lock_ctx(left, &fl->c.flc_list);
+   locks_insert_lock_ctx(&left->c, &fl->c.flc_list);
}
right->fl_start = request->fl_end + 1;
locks_wake_up_blocks(&right->c);
@@ -1425,7 +1425,7 @@ int lease_modify(struct file_lock *fl, int arg, struct 
list_head *dispose

[PATCH v3 30/47] filelock: convert locks_wake_up_blocks to take a file_lock_core pointer

2024-01-31 Thread Jeff Layton
Have locks_wake_up_blocks take a file_lock_core pointer, and fix up the
callers to pass one in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 6892511ed89b..9f3670ba0880 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -806,7 +806,7 @@ static void locks_insert_block(struct file_lock_core 
*blocker,
  *
  * Must be called with the inode->flc_lock held!
  */
-static void locks_wake_up_blocks(struct file_lock *blocker)
+static void locks_wake_up_blocks(struct file_lock_core *blocker)
 {
/*
 * Avoid taking global lock if list is empty. This is safe since new
@@ -815,11 +815,11 @@ static void locks_wake_up_blocks(struct file_lock 
*blocker)
 * fl_blocked_requests list does not require the flc_lock, so we must
 * recheck list_empty() after acquiring the blocked_lock_lock.
 */
-   if (list_empty(&blocker->c.flc_blocked_requests))
+   if (list_empty(&blocker->flc_blocked_requests))
return;
 
spin_lock(&blocked_lock_lock);
-   __locks_wake_up_blocks(&blocker->c);
+   __locks_wake_up_blocks(blocker);
spin_unlock(&blocked_lock_lock);
 }
 
@@ -835,7 +835,7 @@ locks_unlink_lock_ctx(struct file_lock *fl)
 {
locks_delete_global_locks(&fl->c);
list_del_init(&fl->c.flc_list);
-   locks_wake_up_blocks(fl);
+   locks_wake_up_blocks(&fl->c);
 }
 
 static void
@@ -1328,11 +1328,11 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
locks_insert_lock_ctx(left, >c.flc_list);
}
right->fl_start = request->fl_end + 1;
-   locks_wake_up_blocks(right);
+   locks_wake_up_blocks(&right->c);
}
if (left) {
left->fl_end = request->fl_start - 1;
-   locks_wake_up_blocks(left);
+   locks_wake_up_blocks(&left->c);
}
  out:
spin_unlock(&ctx->flc_lock);
@@ -1414,7 +1414,7 @@ int lease_modify(struct file_lock *fl, int arg, struct 
list_head *dispose)
if (error)
return error;
lease_clear_pending(fl, arg);
-   locks_wake_up_blocks(fl);
+   locks_wake_up_blocks(&fl->c);
if (arg == F_UNLCK) {
struct file *filp = fl->c.flc_file;
 

-- 
2.43.0




[PATCH v3 29/47] filelock: make assign_type helper take a file_lock_core pointer

2024-01-31 Thread Jeff Layton
Have assign_type take struct file_lock_core instead of file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index c8fd2964dd98..6892511ed89b 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -439,13 +439,13 @@ static void flock_make_lock(struct file *filp, struct 
file_lock *fl, int type)
fl->fl_end = OFFSET_MAX;
 }
 
-static int assign_type(struct file_lock *fl, int type)
+static int assign_type(struct file_lock_core *flc, int type)
 {
switch (type) {
case F_RDLCK:
case F_WRLCK:
case F_UNLCK:
-   fl->c.flc_type = type;
+   flc->flc_type = type;
break;
default:
return -EINVAL;
@@ -497,7 +497,7 @@ static int flock64_to_posix_lock(struct file *filp, struct 
file_lock *fl,
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
 
-   return assign_type(fl, l->l_type);
+   return assign_type(&fl->c, l->l_type);
 }
 
 /* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
@@ -552,7 +552,7 @@ static const struct lock_manager_operations 
lease_manager_ops = {
  */
 static int lease_init(struct file *filp, int type, struct file_lock *fl)
 {
-   if (assign_type(fl, type) != 0)
+   if (assign_type(&fl->c, type) != 0)
return -EINVAL;
 
fl->c.flc_owner = filp;
@@ -1409,7 +1409,7 @@ static void lease_clear_pending(struct file_lock *fl, int 
arg)
 /* We already had a lease on this file; just change its type */
 int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
 {
-   int error = assign_type(fl, arg);
+   int error = assign_type(&fl->c, arg);
 
if (error)
return error;

-- 
2.43.0




[PATCH v3 28/47] filelock: reorganize locks_delete_block and __locks_insert_block

2024-01-31 Thread Jeff Layton
Rename the old __locks_delete_block to __locks_unlink_block. Rename the
old locks_delete_block function to __locks_delete_block and have it take
a file_lock_core. Make locks_delete_block a simple wrapper around
__locks_delete_block.

Also, change __locks_insert_block to take struct file_lock_core, and
fix up its callers.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 42 ++
 1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index a2be1e0b5a94..c8fd2964dd98 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -659,7 +659,7 @@ static void locks_delete_global_blocked(struct 
file_lock_core *waiter)
  *
  * Must be called with blocked_lock_lock held.
  */
-static void __locks_delete_block(struct file_lock_core *waiter)
+static void __locks_unlink_block(struct file_lock_core *waiter)
 {
locks_delete_global_blocked(waiter);
list_del_init(&waiter->flc_blocked_member);
@@ -675,7 +675,7 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
  struct file_lock_core, flc_blocked_member);
 
fl = file_lock(waiter);
-   __locks_delete_block(waiter);
+   __locks_unlink_block(waiter);
if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
fl->fl_lmops && fl->fl_lmops->lm_notify)
fl->fl_lmops->lm_notify(fl);
@@ -691,16 +691,9 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
}
 }
 
-/**
- * locks_delete_block - stop waiting for a file lock
- * @waiter: the lock which was waiting
- *
- * lockd/nfsd need to disconnect the lock while working on it.
- */
-int locks_delete_block(struct file_lock *waiter_fl)
+static int __locks_delete_block(struct file_lock_core *waiter)
 {
int status = -ENOENT;
-   struct file_lock_core *waiter = &waiter_fl->c;
 
/*
 * If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -731,7 +724,7 @@ int locks_delete_block(struct file_lock *waiter_fl)
if (waiter->flc_blocker)
status = 0;
__locks_wake_up_blocks(waiter);
-   __locks_delete_block(waiter);
+   __locks_unlink_block(waiter);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -741,6 +734,17 @@ int locks_delete_block(struct file_lock *waiter_fl)
spin_unlock(_lock_lock);
return status;
 }
+
+/**
+ * locks_delete_block - stop waiting for a file lock
+ * @waiter: the lock which was waiting
+ *
+ * lockd/nfsd need to disconnect the lock while working on it.
+ */
+int locks_delete_block(struct file_lock *waiter)
+{
+   return __locks_delete_block(&waiter->c);
+}
 EXPORT_SYMBOL(locks_delete_block);
 
 /* Insert waiter into blocker's block list.
@@ -758,13 +762,11 @@ EXPORT_SYMBOL(locks_delete_block);
  * waiters, and add beneath any waiter that blocks the new waiter.
  * Thus wakeups don't happen until needed.
  */
-static void __locks_insert_block(struct file_lock *blocker_fl,
-struct file_lock *waiter_fl,
+static void __locks_insert_block(struct file_lock_core *blocker,
+struct file_lock_core *waiter,
 bool conflict(struct file_lock_core *,
   struct file_lock_core *))
 {
-   struct file_lock_core *blocker = &blocker_fl->c;
-   struct file_lock_core *waiter = &waiter_fl->c;
struct file_lock_core *flc;
 
BUG_ON(!list_empty(&waiter->flc_blocked_member));
@@ -789,8 +791,8 @@ static void __locks_insert_block(struct file_lock 
*blocker_fl,
 }
 
 /* Must be called with flc_lock held. */
-static void locks_insert_block(struct file_lock *blocker,
-  struct file_lock *waiter,
+static void locks_insert_block(struct file_lock_core *blocker,
+  struct file_lock_core *waiter,
   bool conflict(struct file_lock_core *,
 struct file_lock_core *))
 {
@@ -1088,7 +1090,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
if (!(request->c.flc_flags & FL_SLEEP))
goto out;
error = FILE_LOCK_DEFERRED;
-   locks_insert_block(fl, request, flock_locks_conflict);
+   locks_insert_block(&fl->c, &request->c, flock_locks_conflict);
goto out;
}
if (request->c.flc_flags & FL_ACCESS)
@@ -1182,7 +1184,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
__locks_wake_up_blocks(&request->c);
if (likely(!posix_locks_deadlock(request, fl))) {
error = FILE_LOCK_DEFERRED;
-  

[PATCH v3 27/47] filelock: clean up locks_delete_block internals

2024-01-31 Thread Jeff Layton
Rework the internals of locks_delete_block to use struct file_lock_core
(mostly just for clarity's sake). The prototype is not changed.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 0aa1c94671cd..a2be1e0b5a94 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -697,9 +697,10 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
  *
  * lockd/nfsd need to disconnect the lock while working on it.
  */
-int locks_delete_block(struct file_lock *waiter)
+int locks_delete_block(struct file_lock *waiter_fl)
 {
int status = -ENOENT;
+   struct file_lock_core *waiter = &waiter_fl->c;
 
/*
 * If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -722,21 +723,21 @@ int locks_delete_block(struct file_lock *waiter)
 * no new locks can be inserted into its fl_blocked_requests list, and
 * can avoid doing anything further if the list is empty.
 */
-   if (!smp_load_acquire(&waiter->c.flc_blocker) &&
-       list_empty(&waiter->c.flc_blocked_requests))
+   if (!smp_load_acquire(&waiter->flc_blocker) &&
+       list_empty(&waiter->flc_blocked_requests))
return status;
 
spin_lock(&blocked_lock_lock);
-   if (waiter->c.flc_blocker)
+   if (waiter->flc_blocker)
status = 0;
-   __locks_wake_up_blocks(&waiter->c);
-   __locks_delete_block(&waiter->c);
+   __locks_wake_up_blocks(waiter);
+   __locks_delete_block(waiter);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
 * a block. Paired with acquire at the top of this function.
 */
-   smp_store_release(&waiter->c.flc_blocker, NULL);
+   smp_store_release(&waiter->flc_blocker, NULL);
spin_unlock(&blocked_lock_lock);
return status;
 }

-- 
2.43.0




[PATCH v3 26/47] filelock: convert fl_blocker to file_lock_core

2024-01-31 Thread Jeff Layton
Both locks and leases deal with fl_blocker. Switch the fl_blocker
pointer in struct file_lock_core to point to the file_lock_core of the
blocker instead of a file_lock structure.

Signed-off-by: Jeff Layton 
---
 fs/locks.c  | 16 
 include/linux/filelock.h|  2 +-
 include/trace/events/filelock.h |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 0dc1c9da858c..0aa1c94671cd 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -400,7 +400,7 @@ static void locks_move_blocks(struct file_lock *new, struct 
file_lock *fl)
 
/*
 * As ctx->flc_lock is held, new requests cannot be added to
-* ->fl_blocked_requests, so we don't need a lock to check if it
+* ->flc_blocked_requests, so we don't need a lock to check if it
 * is empty.
 */
if (list_empty(&fl->c.flc_blocked_requests))
@@ -410,7 +410,7 @@ static void locks_move_blocks(struct file_lock *new, struct 
file_lock *fl)
 &new->c.flc_blocked_requests);
list_for_each_entry(f, &fl->c.flc_blocked_requests,
    c.flc_blocked_member)
-   f->c.flc_blocker = new;
+   f->c.flc_blocker = &new->c;
spin_unlock(&blocked_lock_lock);
 }
 
@@ -773,7 +773,7 @@ static void __locks_insert_block(struct file_lock 
*blocker_fl,
blocker =  flc;
goto new_blocker;
}
-   waiter->flc_blocker = file_lock(blocker);
+   waiter->flc_blocker = blocker;
list_add_tail(&waiter->flc_blocked_member,
  &blocker->flc_blocked_requests);
 
@@ -996,7 +996,7 @@ static struct file_lock_core 
*what_owner_is_waiting_for(struct file_lock_core *b
hash_for_each_possible(blocked_hash, flc, flc_link, posix_owner_key(blocker)) {
if (posix_same_owner(flc, blocker)) {
while (flc->flc_blocker)
-   flc = &flc->flc_blocker->c;
+   flc = flc->flc_blocker;
return flc;
}
}
@@ -2798,9 +2798,9 @@ static struct file_lock *get_next_blocked_member(struct 
file_lock *node)
 
/* Next member in the linked list could be itself */
tmp = list_next_entry(node, c.flc_blocked_member);
-   if (list_entry_is_head(tmp, &node->c.flc_blocker->c.flc_blocked_requests,
-   c.flc_blocked_member)
-   || tmp == node) {
+   if (list_entry_is_head(tmp, &node->c.flc_blocker->flc_blocked_requests,
+  c.flc_blocked_member)
+   || tmp == node) {
return NULL;
}
 
@@ -2841,7 +2841,7 @@ static int locks_show(struct seq_file *f, void *v)
tmp = get_next_blocked_member(cur);
/* Fall back to parent node */
while (tmp == NULL && cur->c.flc_blocker != NULL) {
-   cur = cur->c.flc_blocker;
+   cur = file_lock(cur->c.flc_blocker);
level--;
tmp = get_next_blocked_member(cur);
}
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 4dab73bb34b9..fdec838a3ca7 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -87,7 +87,7 @@ bool opens_in_grace(struct net *);
  */
 
 struct file_lock_core {
-   struct file_lock *flc_blocker;  /* The lock that is blocking us */
+   struct file_lock_core *flc_blocker; /* The lock that is blocking us */
struct list_head flc_list;  /* link into file_lock_context */
struct hlist_node flc_link; /* node in global lists */
struct list_head flc_blocked_requests;  /* list of requests with
diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index 4be341b5ead0..c778061c6249 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -68,7 +68,7 @@ DECLARE_EVENT_CLASS(filelock_lock,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, blocker)
+   __field(struct file_lock_core *, blocker)
__field(fl_owner_t, owner)
__field(unsigned int, pid)
__field(unsigned int, flags)
@@ -125,7 +125,7 @@ DECLARE_EVENT_CLASS(filelock_lease,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, blocker)
+   __field(struct file_lock_core *, blocker)
__field(fl_owner_t, owner)
__field(unsigned int, flags)
__field(unsigned char, type)

-- 
2.43.0




[PATCH v3 25/47] filelock: convert __locks_insert_block, conflict and deadlock checks to use file_lock_core

2024-01-31 Thread Jeff Layton
Have both __locks_insert_block and the deadlock and conflict checking
functions take a struct file_lock_core pointer instead of a struct
file_lock one. Also, change posix_locks_deadlock to return bool.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 132 +
 1 file changed, 72 insertions(+), 60 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 1e8b943bd7f9..0dc1c9da858c 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -757,39 +757,41 @@ EXPORT_SYMBOL(locks_delete_block);
  * waiters, and add beneath any waiter that blocks the new waiter.
  * Thus wakeups don't happen until needed.
  */
-static void __locks_insert_block(struct file_lock *blocker,
-struct file_lock *waiter,
-bool conflict(struct file_lock *,
-  struct file_lock *))
+static void __locks_insert_block(struct file_lock *blocker_fl,
+struct file_lock *waiter_fl,
+bool conflict(struct file_lock_core *,
+  struct file_lock_core *))
 {
-   struct file_lock *fl;
-   BUG_ON(!list_empty(&waiter->c.flc_blocked_member));
+   struct file_lock_core *blocker = &blocker_fl->c;
+   struct file_lock_core *waiter = &waiter_fl->c;
+   struct file_lock_core *flc;
 
+   BUG_ON(!list_empty(&waiter->flc_blocked_member));
 new_blocker:
-   list_for_each_entry(fl, &blocker->c.flc_blocked_requests,
-   c.flc_blocked_member)
-   if (conflict(fl, waiter)) {
-   blocker =  fl;
+   list_for_each_entry(flc, &blocker->flc_blocked_requests, flc_blocked_member)
+   if (conflict(flc, waiter)) {
+   blocker =  flc;
goto new_blocker;
}
-   waiter->c.flc_blocker = blocker;
-   list_add_tail(&waiter->c.flc_blocked_member,
- &blocker->c.flc_blocked_requests);
-   if ((blocker->c.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
-   locks_insert_global_blocked(&waiter->c);
+   waiter->flc_blocker = file_lock(blocker);
+   list_add_tail(&waiter->flc_blocked_member,
+ &blocker->flc_blocked_requests);
 
-   /* The requests in waiter->fl_blocked are known to conflict with
+   if ((blocker->flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
+   locks_insert_global_blocked(waiter);
+
+   /* The requests in waiter->flc_blocked are known to conflict with
 * waiter, but might not conflict with blocker, or the requests
 * and lock which block it.  So they all need to be woken.
 */
-   __locks_wake_up_blocks(&waiter->c);
+   __locks_wake_up_blocks(waiter);
 }
 
 /* Must be called with flc_lock held. */
 static void locks_insert_block(struct file_lock *blocker,
   struct file_lock *waiter,
-  bool conflict(struct file_lock *,
-struct file_lock *))
+  bool conflict(struct file_lock_core *,
+struct file_lock_core *))
 {
spin_lock(&blocked_lock_lock);
__locks_insert_block(blocker, waiter, conflict);
@@ -846,12 +848,12 @@ locks_delete_lock_ctx(struct file_lock *fl, struct 
list_head *dispose)
 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
  * checks for shared/exclusive status of overlapping locks.
  */
-static bool locks_conflict(struct file_lock *caller_fl,
-  struct file_lock *sys_fl)
+static bool locks_conflict(struct file_lock_core *caller_flc,
+  struct file_lock_core *sys_flc)
 {
-   if (lock_is_write(sys_fl))
+   if (sys_flc->flc_type == F_WRLCK)
return true;
-   if (lock_is_write(caller_fl))
+   if (caller_flc->flc_type == F_WRLCK)
return true;
return false;
 }
@@ -859,20 +861,23 @@ static bool locks_conflict(struct file_lock *caller_fl,
 /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
  * checking before calling the locks_conflict().
  */
-static bool posix_locks_conflict(struct file_lock *caller_fl,
-struct file_lock *sys_fl)
+static bool posix_locks_conflict(struct file_lock_core *caller_flc,
+struct file_lock_core *sys_flc)
 {
+   struct file_lock *caller_fl = file_lock(caller_flc);
+   struct file_lock *sys_fl = file_lock(sys_flc);
+
/* POSIX locks owned by the same process do not conflict with
 * each other.
 */
-   if (posix_same_owner(&caller_fl->c, &sys_fl->c))
+   if (posix_same_owner(caller_flc, sys_flc))
return false;
 
/* Check whether they overlap */
if (!locks_overlap(caller_fl, sys_fl))
 

[PATCH v3 24/47] filelock: make __locks_delete_block and __locks_wake_up_blocks take file_lock_core

2024-01-31 Thread Jeff Layton
Convert __locks_delete_block and __locks_wake_up_blocks to take a struct
file_lock_core pointer.

While we could do this in another way, we're going to need to add a
file_lock() helper function later anyway, so introduce and use it now.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 45 +++--
 1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index ef67a5a7bae8..1e8b943bd7f9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -69,6 +69,11 @@
 
 #include 
 
+static struct file_lock *file_lock(struct file_lock_core *flc)
+{
+   return container_of(flc, struct file_lock, c);
+}
+
 static bool lease_breaking(struct file_lock *fl)
 {
return fl->c.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -654,31 +659,35 @@ static void locks_delete_global_blocked(struct file_lock_core *waiter)
  *
  * Must be called with blocked_lock_lock held.
  */
-static void __locks_delete_block(struct file_lock *waiter)
+static void __locks_delete_block(struct file_lock_core *waiter)
 {
-   locks_delete_global_blocked(&waiter->c);
-   list_del_init(&waiter->c.flc_blocked_member);
+   locks_delete_global_blocked(waiter);
+   list_del_init(&waiter->flc_blocked_member);
 }
 
-static void __locks_wake_up_blocks(struct file_lock *blocker)
+static void __locks_wake_up_blocks(struct file_lock_core *blocker)
 {
-   while (!list_empty(&blocker->c.flc_blocked_requests)) {
-   struct file_lock *waiter;
+   while (!list_empty(&blocker->flc_blocked_requests)) {
+   struct file_lock_core *waiter;
+   struct file_lock *fl;
+
+   waiter = list_first_entry(&blocker->flc_blocked_requests,
+ struct file_lock_core, flc_blocked_member);
 
-   waiter = list_first_entry(&blocker->c.flc_blocked_requests,
- struct file_lock, c.flc_blocked_member);
+   fl = file_lock(waiter);
__locks_delete_block(waiter);
-   if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
-   waiter->fl_lmops->lm_notify(waiter);
+   if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
+   fl->fl_lmops && fl->fl_lmops->lm_notify)
+   fl->fl_lmops->lm_notify(fl);
else
-   locks_wake_up(waiter);
+   locks_wake_up(fl);
 
/*
-* The setting of fl_blocker to NULL marks the "done"
+* The setting of flc_blocker to NULL marks the "done"
 * point in deleting a block. Paired with acquire at the top
 * of locks_delete_block().
 */
-   smp_store_release(&waiter->c.flc_blocker, NULL);
+   smp_store_release(&waiter->flc_blocker, NULL);
}
 }
 
@@ -720,8 +729,8 @@ int locks_delete_block(struct file_lock *waiter)
spin_lock(&blocked_lock_lock);
if (waiter->c.flc_blocker)
status = 0;
-   __locks_wake_up_blocks(waiter);
-   __locks_delete_block(waiter);
+   __locks_wake_up_blocks(&waiter->c);
+   __locks_delete_block(&waiter->c);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -773,7 +782,7 @@ static void __locks_insert_block(struct file_lock *blocker,
 * waiter, but might not conflict with blocker, or the requests
 * and lock which block it.  So they all need to be woken.
 */
-   __locks_wake_up_blocks(waiter);
+   __locks_wake_up_blocks(&waiter->c);
 }
 
 /* Must be called with flc_lock held. */
@@ -805,7 +814,7 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
return;
 
spin_lock(&blocked_lock_lock);
-   __locks_wake_up_blocks(blocker);
+   __locks_wake_up_blocks(&blocker->c);
spin_unlock(&blocked_lock_lock);
 }
 
@@ -1159,7 +1168,7 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 * Ensure that we don't find any locks blocked on this
 * request during deadlock detection.
 */
-   __locks_wake_up_blocks(request);
+   __locks_wake_up_blocks(&request->c);
if (likely(!posix_locks_deadlock(request, fl))) {
error = FILE_LOCK_DEFERRED;
__locks_insert_block(fl, request,

-- 
2.43.0
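The file_lock() helper introduced in this patch is the classic container_of() pattern: recover the containing structure from a pointer to an embedded member. A minimal userspace sketch of the pattern, with illustrative struct names rather than the kernel's:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace re-creation of the kernel macro: subtract the member's
 * offset from the member pointer to get back to the outer struct. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Hypothetical stand-ins for file_lock_core embedded in file_lock. */
struct core { int flags; };
struct outer {
	int extra;
	struct core c;	/* embedded core, like "struct file_lock_core c" */
};

/* Shape of the file_lock() helper: core pointer in, outer pointer out. */
static struct outer *to_outer(struct core *c)
{
	return container_of(c, struct outer, c);
}
```

The same helper later lets conflict and blocking code traffic purely in core pointers while still reaching type-specific fields when needed.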




[PATCH v3 23/47] filelock: convert locks_{insert,delete}_global_blocked

2024-01-31 Thread Jeff Layton
Have locks_insert_global_blocked and locks_delete_global_blocked take a
struct file_lock_core pointer.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index fa9b2beed0d7..ef67a5a7bae8 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -635,19 +635,18 @@ posix_owner_key(struct file_lock_core *flc)
return (unsigned long) flc->flc_owner;
 }
 
-static void locks_insert_global_blocked(struct file_lock *waiter)
+static void locks_insert_global_blocked(struct file_lock_core *waiter)
 {
lockdep_assert_held(_lock_lock);
 
-   hash_add(blocked_hash, &waiter->c.flc_link,
-posix_owner_key(&waiter->c));
+   hash_add(blocked_hash, &waiter->flc_link, posix_owner_key(waiter));
 }
 
-static void locks_delete_global_blocked(struct file_lock *waiter)
+static void locks_delete_global_blocked(struct file_lock_core *waiter)
 {
lockdep_assert_held(_lock_lock);
 
-   hash_del(&waiter->c.flc_link);
+   hash_del(&waiter->flc_link);
 }
 
 /* Remove waiter from blocker's block list.
@@ -657,7 +656,7 @@ static void __locks_delete_block(struct file_lock *waiter)
  */
 static void __locks_delete_block(struct file_lock *waiter)
 {
-   locks_delete_global_blocked(waiter);
+   locks_delete_global_blocked(&waiter->c);
list_del_init(&waiter->c.flc_blocked_member);
 }
 
@@ -768,7 +767,7 @@ static void __locks_insert_block(struct file_lock *blocker,
list_add_tail(&waiter->c.flc_blocked_member,
  &blocker->c.flc_blocked_requests);
if ((blocker->c.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
-   locks_insert_global_blocked(waiter);
+   locks_insert_global_blocked(&waiter->c);
 
/* The requests in waiter->fl_blocked are known to conflict with
 * waiter, but might not conflict with blocker, or the requests

-- 
2.43.0




[PATCH v3 22/47] filelock: make locks_{insert,delete}_global_locks take file_lock_core arg

2024-01-31 Thread Jeff Layton
Convert these functions to take a file_lock_core instead of a file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 1cfd02562e9f..fa9b2beed0d7 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -596,20 +596,20 @@ static int posix_same_owner(struct file_lock_core *fl1, struct file_lock_core *fl2)
 }
 
 /* Must be called with the flc_lock held! */
-static void locks_insert_global_locks(struct file_lock *fl)
+static void locks_insert_global_locks(struct file_lock_core *flc)
 {
struct file_lock_list_struct *fll = this_cpu_ptr(&file_lock_list);
 
percpu_rwsem_assert_held(&file_rwsem);
 
spin_lock(&fll->lock);
-   fl->c.flc_link_cpu = smp_processor_id();
-   hlist_add_head(&fl->c.flc_link, &fll->hlist);
+   flc->flc_link_cpu = smp_processor_id();
+   hlist_add_head(&flc->flc_link, &fll->hlist);
spin_unlock(&fll->lock);
 }
 
 /* Must be called with the flc_lock held! */
-static void locks_delete_global_locks(struct file_lock *fl)
+static void locks_delete_global_locks(struct file_lock_core *flc)
 {
struct file_lock_list_struct *fll;
 
@@ -620,12 +620,12 @@ static void locks_delete_global_locks(struct file_lock *fl)
 * is done while holding the flc_lock, and new insertions into the list
 * also require that it be held.
 */
-   if (hlist_unhashed(&fl->c.flc_link))
+   if (hlist_unhashed(&flc->flc_link))
return;
 
-   fll = per_cpu_ptr(&file_lock_list, fl->c.flc_link_cpu);
+   fll = per_cpu_ptr(&file_lock_list, flc->flc_link_cpu);
spin_lock(&fll->lock);
-   hlist_del_init(&fl->c.flc_link);
+   hlist_del_init(&flc->flc_link);
spin_unlock(&fll->lock);
 }
 
@@ -814,13 +814,13 @@ static void
 locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
 {
list_add_tail(&fl->c.flc_list, before);
-   locks_insert_global_locks(fl);
+   locks_insert_global_locks(&fl->c);
 }
 
 static void
 locks_unlink_lock_ctx(struct file_lock *fl)
 {
-   locks_delete_global_locks(fl);
+   locks_delete_global_locks(&fl->c);
list_del_init(&fl->c.flc_list);
locks_wake_up_blocks(fl);
 }

-- 
2.43.0
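The flc_link_cpu field these hunks carry over records, at insert time, which per-CPU global list an entry went on, so deletion can go straight to that list and take only its lock. A toy userspace model of this "remember your bucket" design, with plain array slots standing in for CPUs and all names hypothetical:

```c
#include <assert.h>
#include <stddef.h>

#define NBUCKETS 4

struct entry {
	int bucket;		/* plays the role of flc_link_cpu */
	struct entry *next;
};

/* One singly linked list per "CPU". */
static struct entry *heads[NBUCKETS];

static void insert_entry(struct entry *e, int current_bucket)
{
	e->bucket = current_bucket;	/* recorded at insert time */
	e->next = heads[current_bucket];
	heads[current_bucket] = e;
}

static void delete_entry(struct entry *e)
{
	/* No search across buckets: go straight to the recorded one. */
	struct entry **p = &heads[e->bucket];

	while (*p && *p != e)
		p = &(*p)->next;
	if (*p)
		*p = e->next;
}
```

In the kernel the payoff is that removal only contends on one per-CPU spinlock instead of a global one; the sketch keeps just the bookkeeping, not the locking.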




[PATCH v3 21/47] filelock: convert posix_owner_key to take file_lock_core arg

2024-01-31 Thread Jeff Layton
Convert posix_owner_key to take struct file_lock_core pointer, and fix
up the callers to pass one in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 9ff331b55b7a..1cfd02562e9f 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -630,9 +630,9 @@ static void locks_delete_global_locks(struct file_lock *fl)
 }
 
 static unsigned long
-posix_owner_key(struct file_lock *fl)
+posix_owner_key(struct file_lock_core *flc)
 {
-   return (unsigned long) fl->c.flc_owner;
+   return (unsigned long) flc->flc_owner;
 }
 
 static void locks_insert_global_blocked(struct file_lock *waiter)
@@ -640,7 +640,7 @@ static void locks_insert_global_blocked(struct file_lock *waiter)
lockdep_assert_held(_lock_lock);
 
hash_add(blocked_hash, &waiter->c.flc_link,
-posix_owner_key(waiter));
+posix_owner_key(&waiter->c));
 }
 
 static void locks_delete_global_blocked(struct file_lock *waiter)
@@ -977,7 +977,7 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
 {
struct file_lock *fl;
 
-   hash_for_each_possible(blocked_hash, fl, c.flc_link, posix_owner_key(block_fl)) {
+   hash_for_each_possible(blocked_hash, fl, c.flc_link, posix_owner_key(&block_fl->c)) {
if (posix_same_owner(&fl->c, &block_fl->c)) {
while (fl->c.flc_blocker)
fl = fl->c.flc_blocker;

-- 
2.43.0




[PATCH v3 20/47] filelock: make posix_same_owner take file_lock_core pointers

2024-01-31 Thread Jeff Layton
Change posix_same_owner to take struct file_lock_core pointers, and
convert the callers to pass those in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 5d25a3f53c9d..9ff331b55b7a 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -590,9 +590,9 @@ static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
 /*
  * Check whether two locks have the same owner.
  */
-static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
+static int posix_same_owner(struct file_lock_core *fl1, struct file_lock_core *fl2)
 {
-   return fl1->c.flc_owner == fl2->c.flc_owner;
+   return fl1->flc_owner == fl2->flc_owner;
 }
 
 /* Must be called with the flc_lock held! */
@@ -857,7 +857,7 @@ static bool posix_locks_conflict(struct file_lock *caller_fl,
/* POSIX locks owned by the same process do not conflict with
 * each other.
 */
-   if (posix_same_owner(caller_fl, sys_fl))
+   if (posix_same_owner(&caller_fl->c, &sys_fl->c))
return false;
 
/* Check whether they overlap */
@@ -875,7 +875,7 @@ static bool posix_test_locks_conflict(struct file_lock *caller_fl,
 {
/* F_UNLCK checks any locks on the same fd. */
if (lock_is_unlock(caller_fl)) {
-   if (!posix_same_owner(caller_fl, sys_fl))
+   if (!posix_same_owner(&caller_fl->c, &sys_fl->c))
return false;
return locks_overlap(caller_fl, sys_fl);
}
@@ -978,7 +978,7 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
struct file_lock *fl;
 
hash_for_each_possible(blocked_hash, fl, c.flc_link, posix_owner_key(block_fl)) {
-   if (posix_same_owner(fl, block_fl)) {
+   if (posix_same_owner(&fl->c, &block_fl->c)) {
while (fl->c.flc_blocker)
fl = fl->c.flc_blocker;
return fl;
@@ -1005,7 +1005,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
if (i++ > MAX_DEADLK_ITERATIONS)
return 0;
-   if (posix_same_owner(caller_fl, block_fl))
+   if (posix_same_owner(&caller_fl->c, &block_fl->c))
return 1;
}
return 0;
@@ -1178,13 +1178,13 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 
/* Find the first old lock with the same owner as the new lock */
list_for_each_entry(fl, &ctx->flc_posix, c.flc_list) {
-   if (posix_same_owner(request, fl))
+   if (posix_same_owner(&request->c, &fl->c))
break;
}
 
/* Process locks with this owner. */
list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, c.flc_list) {
-   if (!posix_same_owner(request, fl))
+   if (!posix_same_owner(&request->c, &fl->c))
break;
 
/* Detect adjacent or overlapping regions (if same lock type) */

-- 
2.43.0
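posix_locks_deadlock(), visible in context above, walks the chain of blockers from the lock we would wait on and reports deadlock if the chain leads back to the requester's own owner, giving up after MAX_DEADLK_ITERATIONS. A simplified standalone sketch of that walk (illustrative types; it follows a plain blocker pointer rather than the kernel's blocked_hash lookup):

```c
#include <assert.h>
#include <stddef.h>

#define MAX_ITER 10	/* stands in for MAX_DEADLK_ITERATIONS */

/* Hypothetical waiter: who owns it, and whose lock it is waiting on. */
struct waiter {
	void *owner;
	struct waiter *blocker;
};

static int would_deadlock(void *caller_owner, struct waiter *block)
{
	int i = 0;

	for (; block; block = block->blocker) {
		if (i++ > MAX_ITER)
			return 0;	/* give up: assume no deadlock */
		if (block->owner == caller_owner)
			return 1;	/* chain loops back to us */
	}
	return 0;
}
```

The iteration cap matters because, as in the kernel, a long or cyclic chain of waiters must not turn the conflict check itself into an unbounded walk.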




[PATCH v3 19/47] filelock: convert more internal functions to use file_lock_core

2024-01-31 Thread Jeff Layton
Convert more internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:

- locks_dump_ctx_list
- locks_check_ctx_file_list
- locks_release_private
- locks_owner_has_blockers

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 51 +--
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index f418c6e31219..5d25a3f53c9d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -197,13 +197,12 @@ locks_get_lock_context(struct inode *inode, int type)
 static void
 locks_dump_ctx_list(struct list_head *list, char *list_type)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
-   list_for_each_entry(fl, list, c.flc_list) {
-   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
-   fl->c.flc_owner, fl->c.flc_flags,
-   fl->c.flc_type, fl->c.flc_pid);
-   }
+   list_for_each_entry(flc, list, flc_list)
+   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
+   list_type, flc->flc_owner, flc->flc_flags,
+   flc->flc_type, flc->flc_pid);
 }
 
 static void
@@ -224,20 +223,19 @@ locks_check_ctx_lists(struct inode *inode)
 }
 
 static void
-locks_check_ctx_file_list(struct file *filp, struct list_head *list,
-   char *list_type)
+locks_check_ctx_file_list(struct file *filp, struct list_head *list, char *list_type)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
struct inode *inode = file_inode(filp);
 
-   list_for_each_entry(fl, list, c.flc_list)
-   if (fl->c.flc_file == filp)
+   list_for_each_entry(flc, list, flc_list)
+   if (flc->flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
-   fl->c.flc_owner, fl->c.flc_flags,
-   fl->c.flc_type, fl->c.flc_pid);
+   flc->flc_owner, flc->flc_flags,
+   flc->flc_type, flc->flc_pid);
 }
 
 void
@@ -274,11 +272,13 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
 void locks_release_private(struct file_lock *fl)
 {
-   BUG_ON(waitqueue_active(&fl->c.flc_wait));
-   BUG_ON(!list_empty(&fl->c.flc_list));
-   BUG_ON(!list_empty(&fl->c.flc_blocked_requests));
-   BUG_ON(!list_empty(&fl->c.flc_blocked_member));
-   BUG_ON(!hlist_unhashed(&fl->c.flc_link));
+   struct file_lock_core *flc = &fl->c;
+
+   BUG_ON(waitqueue_active(&flc->flc_wait));
+   BUG_ON(!list_empty(&flc->flc_list));
+   BUG_ON(!list_empty(&flc->flc_blocked_requests));
+   BUG_ON(!list_empty(&flc->flc_blocked_member));
+   BUG_ON(!hlist_unhashed(&flc->flc_link));
 
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -288,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
 
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
-   fl->fl_lmops->lm_put_owner(fl->c.flc_owner);
-   fl->c.flc_owner = NULL;
+   fl->fl_lmops->lm_put_owner(flc->flc_owner);
+   flc->flc_owner = NULL;
}
fl->fl_lmops = NULL;
}
@@ -305,16 +305,15 @@ EXPORT_SYMBOL_GPL(locks_release_private);
  *   %true: @owner has at least one blocker
  *   %false: @owner has no blockers
  */
-bool locks_owner_has_blockers(struct file_lock_context *flctx,
-   fl_owner_t owner)
+bool locks_owner_has_blockers(struct file_lock_context *flctx, fl_owner_t owner)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
spin_lock(&flctx->flc_lock);
-   list_for_each_entry(fl, &flctx->flc_posix, c.flc_list) {
-   if (fl->c.flc_owner != owner)
+   list_for_each_entry(flc, &flctx->flc_posix, flc_list) {
+   if (flc->flc_owner != owner)
continue;
-   if (!list_empty(&fl->c.flc_blocked_requests)) {
+   if (!list_empty(&flc->flc_blocked_requests)) {
spin_unlock(&flctx->flc_lock);
return true;
}

-- 
2.43.0
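The flc_list and flc_blocked_member conversions in this patch all rest on intrusive lists: the list node lives inside each element, and container_of() (hidden behind list_for_each_entry) recovers the element from the node. A compact userspace sketch of the idea, with made-up names:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal singly linked intrusive list; the kernel's list_head is a
 * doubly linked version of the same idea. */
struct list_node { struct list_node *next; };

struct item {
	int value;
	struct list_node link;	/* embedded node, like flc_list */
};

/* What list_for_each_entry does per node: node -> containing item. */
#define node_to_item(n) \
	((struct item *)((char *)(n) - offsetof(struct item, link)))

static int sum_items(struct list_node *head)
{
	struct list_node *n;
	int sum = 0;

	for (n = head; n; n = n->next)
		sum += node_to_item(n)->value;
	return sum;
}
```

Because the node knows nothing about its container, the same iteration code works whether the embedding type is file_lock or file_lock_core, which is exactly what makes this series' conversions mostly mechanical.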




[PATCH v3 18/47] filelock: have fs/locks.c deal with file_lock_core directly

2024-01-31 Thread Jeff Layton
Convert fs/locks.c to access fl_core fields directly rather than using
the backward-compatibility macros. Most of this was done with
coccinelle, with a few by-hand fixups.

Signed-off-by: Jeff Layton 
---
 fs/locks.c  | 467 
 include/trace/events/filelock.h |  32 +--
 2 files changed, 254 insertions(+), 245 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 097254ab35d3..f418c6e31219 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,8 +48,6 @@
  * children.
  *
  */
-#define _NEED_FILE_LOCK_FIELD_MACROS
-
 #include 
 #include 
 #include 
@@ -73,16 +71,16 @@
 
 static bool lease_breaking(struct file_lock *fl)
 {
-   return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
+   return fl->c.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
 }
 
 static int target_leasetype(struct file_lock *fl)
 {
-   if (fl->fl_flags & FL_UNLOCK_PENDING)
+   if (fl->c.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
-   if (fl->fl_flags & FL_DOWNGRADE_PENDING)
+   if (fl->c.flc_flags & FL_DOWNGRADE_PENDING)
return F_RDLCK;
-   return fl->fl_type;
+   return fl->c.flc_type;
 }
 
 static int leases_enable = 1;
@@ -201,8 +199,10 @@ locks_dump_ctx_list(struct list_head *list, char *list_type)
 {
struct file_lock *fl;
 
-   list_for_each_entry(fl, list, fl_list) {
-   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+   list_for_each_entry(fl, list, c.flc_list) {
+   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
+   fl->c.flc_owner, fl->c.flc_flags,
+   fl->c.flc_type, fl->c.flc_pid);
}
 }
 
@@ -230,13 +230,14 @@ locks_check_ctx_file_list(struct file *filp, struct list_head *list,
struct file_lock *fl;
struct inode *inode = file_inode(filp);
 
-   list_for_each_entry(fl, list, fl_list)
-   if (fl->fl_file == filp)
+   list_for_each_entry(fl, list, c.flc_list)
+   if (fl->c.flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
-   fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+   fl->c.flc_owner, fl->c.flc_flags,
+   fl->c.flc_type, fl->c.flc_pid);
 }
 
 void
@@ -250,13 +251,13 @@ locks_free_lock_context(struct inode *inode)
}
 }
 
-static void locks_init_lock_heads(struct file_lock *fl)
+static void locks_init_lock_heads(struct file_lock_core *flc)
 {
-   INIT_HLIST_NODE(&fl->fl_link);
-   INIT_LIST_HEAD(&fl->fl_list);
-   INIT_LIST_HEAD(&fl->fl_blocked_requests);
-   INIT_LIST_HEAD(&fl->fl_blocked_member);
-   init_waitqueue_head(&fl->fl_wait);
+   INIT_HLIST_NODE(&flc->flc_link);
+   INIT_LIST_HEAD(&flc->flc_list);
+   INIT_LIST_HEAD(&flc->flc_blocked_requests);
+   INIT_LIST_HEAD(&flc->flc_blocked_member);
+   init_waitqueue_head(&flc->flc_wait);
 }
 
 /* Allocate an empty lock structure. */
@@ -265,7 +266,7 @@ struct file_lock *locks_alloc_lock(void)
struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);
 
if (fl)
-   locks_init_lock_heads(fl);
+   locks_init_lock_heads(&fl->c);
 
return fl;
 }
@@ -273,11 +274,11 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
 void locks_release_private(struct file_lock *fl)
 {
-   BUG_ON(waitqueue_active(&fl->fl_wait));
-   BUG_ON(!list_empty(&fl->fl_list));
-   BUG_ON(!list_empty(&fl->fl_blocked_requests));
-   BUG_ON(!list_empty(&fl->fl_blocked_member));
-   BUG_ON(!hlist_unhashed(&fl->fl_link));
+   BUG_ON(waitqueue_active(&fl->c.flc_wait));
+   BUG_ON(!list_empty(&fl->c.flc_list));
+   BUG_ON(!list_empty(&fl->c.flc_blocked_requests));
+   BUG_ON(!list_empty(&fl->c.flc_blocked_member));
+   BUG_ON(!hlist_unhashed(&fl->c.flc_link));
 
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -287,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
 
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
-   fl->fl_lmops->lm_put_owner(fl->fl_owner);
-   fl->fl_owner = NULL;
+   fl->fl_lmops->lm_put_owner(fl->c.flc_owner);
+   fl->c.flc_owner = NULL;
}
fl->fl_lmo

[PATCH v3 14/47] smb/client: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions.

Signed-off-by: Jeff Layton 
---
 fs/smb/client/file.c | 12 ++--
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index b75282c204da..27f9ef4e69a8 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -1409,7 +1409,7 @@ cifs_posix_lock_test(struct file *file, struct file_lock *flock)
down_read(&cinode->lock_sem);
posix_test_lock(file, flock);
 
-   if (flock->fl_type == F_UNLCK && !cinode->can_cache_brlcks) {
+   if (lock_is_unlock(flock) && !cinode->can_cache_brlcks) {
flock->fl_type = saved_type;
rc = 1;
}
@@ -1581,7 +1581,7 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
 
el = locks_to_send.next;
spin_lock(&flctx->flc_lock);
-   list_for_each_entry(flock, &flctx->flc_posix, fl_list) {
+   for_each_file_lock(flock, &flctx->flc_posix) {
if (el == &locks_to_send) {
/*
 * The list ended. We don't have enough allocated
@@ -1591,7 +1591,7 @@ cifs_push_posix_locks(struct cifsFileInfo *cfile)
break;
}
length = cifs_flock_len(flock);
-   if (flock->fl_type == F_RDLCK || flock->fl_type == F_SHLCK)
+   if (lock_is_read(flock) || flock->fl_type == F_SHLCK)
type = CIFS_RDLCK;
else
type = CIFS_WRLCK;
@@ -1681,16 +1681,16 @@ cifs_read_flock(struct file_lock *flock, __u32 *type, int *lock, int *unlock,
cifs_dbg(FYI, "Unknown lock flags 0x%x\n", flock->fl_flags);
 
*type = server->vals->large_lock_type;
-   if (flock->fl_type == F_WRLCK) {
+   if (lock_is_write(flock)) {
cifs_dbg(FYI, "F_WRLCK\n");
*type |= server->vals->exclusive_lock_type;
*lock = 1;
-   } else if (flock->fl_type == F_UNLCK) {
+   } else if (lock_is_unlock(flock)) {
cifs_dbg(FYI, "F_UNLCK\n");
*type |= server->vals->unlock_lock_type;
*unlock = 1;
/* Check if unlock includes more than one lock range */
-   } else if (flock->fl_type == F_RDLCK) {
+   } else if (lock_is_read(flock)) {
cifs_dbg(FYI, "F_RDLCK\n");
*type |= server->vals->shared_lock_type;
*lock = 1;

-- 
2.43.0
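The lock_is_read()/lock_is_write()/lock_is_unlock() helpers this conversion leans on are introduced earlier in the series and aren't shown in this excerpt; a plausible userspace approximation of their shape, using a stand-in struct rather than the kernel's file_lock, would be:

```c
#include <assert.h>
#include <fcntl.h>	/* F_RDLCK, F_WRLCK, F_UNLCK */
#include <stdbool.h>

/* Hypothetical stand-in; in the kernel the type lives in the lock core. */
struct demo_lock { unsigned char fl_type; };

static bool demo_lock_is_unlock(const struct demo_lock *fl)
{
	return fl->fl_type == F_UNLCK;
}

static bool demo_lock_is_read(const struct demo_lock *fl)
{
	return fl->fl_type == F_RDLCK;
}

static bool demo_lock_is_write(const struct demo_lock *fl)
{
	return fl->fl_type == F_WRLCK;
}
```

The point of the helpers is readability at call sites: `if (lock_is_unlock(flock))` reads better than comparing fl_type against F_UNLCK by hand, and it survives later changes to where the type field lives.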




[PATCH v3 11/47] nfs: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions. Also, in later
patches we're going to introduce some temporary macros with names that
clash with the variable name in nfs4_proc_unlck. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/nfs/delegation.c |  2 +-
 fs/nfs/file.c   |  4 ++--
 fs/nfs/nfs4proc.c   | 12 ++--
 fs/nfs/nfs4state.c  | 18 +-
 fs/nfs/nfs4xdr.c|  2 +-
 fs/nfs/write.c  |  4 ++--
 6 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
index fa1a14def45c..ca6985001466 100644
--- a/fs/nfs/delegation.c
+++ b/fs/nfs/delegation.c
@@ -156,7 +156,7 @@ static int nfs_delegation_claim_locks(struct nfs4_state *state, const nfs4_state
list = &flctx->flc_posix;
spin_lock(&flctx->flc_lock);
 restart:
-   list_for_each_entry(fl, list, fl_list) {
+   for_each_file_lock(fl, list) {
if (nfs_file_open_context(fl->fl_file)->state != state)
continue;
spin_unlock(&flctx->flc_lock);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 8577ccf621f5..1a7a76d6055b 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -851,7 +851,7 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
 
if (IS_GETLK(cmd))
ret = do_getlk(filp, cmd, fl, is_local);
-   else if (fl->fl_type == F_UNLCK)
+   else if (lock_is_unlock(fl))
ret = do_unlk(filp, cmd, fl, is_local);
else
ret = do_setlk(filp, cmd, fl, is_local);
@@ -878,7 +878,7 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
is_local = 1;
 
/* We're simulating flock() locks using posix locks on the server */
-   if (fl->fl_type == F_UNLCK)
+   if (lock_is_unlock(fl))
return do_unlk(filp, cmd, fl, is_local);
return do_setlk(filp, cmd, fl, is_local);
 }
diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 23819a756508..df54fcd0fa08 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7045,7 +7045,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
struct rpc_task *task;
struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
int status = 0;
-   unsigned char fl_flags = request->fl_flags;
+   unsigned char saved_flags = request->fl_flags;
 
status = nfs4_set_lock_state(state, request);
/* Unlock _before_ we do the RPC call */
@@ -7080,7 +7080,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
status = rpc_wait_for_completion_task(task);
rpc_put_task(task);
 out:
-   request->fl_flags = fl_flags;
+   request->fl_flags = saved_flags;
trace_nfs4_unlock(request, state, F_SETLK, status);
return status;
 }
@@ -7398,7 +7398,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
 {
struct nfs_inode *nfsi = NFS_I(state->inode);
struct nfs4_state_owner *sp = state->owner;
-   unsigned char fl_flags = request->fl_flags;
+   unsigned char flags = request->fl_flags;
int status;
 
request->fl_flags |= FL_ACCESS;
@@ -7410,7 +7410,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
if (test_bit(NFS_DELEGATED_STATE, &state->flags)) {
/* Yes: cache locks! */
/* ...but avoid races with delegation recall... */
-   request->fl_flags = fl_flags & ~FL_SLEEP;
+   request->fl_flags = flags & ~FL_SLEEP;
status = locks_lock_inode_wait(state->inode, request);
up_read(&nfsi->rwsem);
mutex_unlock(&sp->so_delegreturn_mutex);
@@ -7420,7 +7420,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
mutex_unlock(&sp->so_delegreturn_mutex);
status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW);
 out:
-   request->fl_flags = fl_flags;
+   request->fl_flags = flags;
return status;
 }
 
@@ -7562,7 +7562,7 @@ nfs4_proc_lock(struct file *filp, int cmd, struct file_lock *request)
if (!(IS_SETLK(cmd) || IS_SETLKW(cmd)))
return -EINVAL;
 
-   if (request->fl_type == F_UNLCK) {
+   if (lock_is_unlock(request)) {
if (state != NULL)
return nfs4_proc_unlck(state, cmd, request);
return 0;
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 9a5d911a7edc..16b57735e26a 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -847,15 +847,15 @@ void nfs4_close_sync(struct nfs4_state *state, fmode_t fmode)
  */
 static struct nfs4_lock_state *
 __nfs4_find_lock_state(struct nfs4_state *state,
-  fl_owner_t fl_owner, fl_owner_t fl_owner2)
+  fl_owner_t owner, fl_owner_t owner2

[PATCH v3 13/47] ocfs2: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions.

Signed-off-by: Jeff Layton 
---
 fs/ocfs2/locks.c  | 4 ++--
 fs/ocfs2/stack_user.c | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index f37174e79fad..ef4fd91b586e 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -27,7 +27,7 @@ static int ocfs2_do_flock(struct file *file, struct inode *inode,
struct ocfs2_file_private *fp = file->private_data;
struct ocfs2_lock_res *lockres = &fp->fp_flock;
 
-   if (fl->fl_type == F_WRLCK)
+   if (lock_is_write(fl))
level = 1;
if (!IS_SETLKW(cmd))
trylock = 1;
@@ -107,7 +107,7 @@ int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl)
ocfs2_mount_local(osb))
return locks_lock_file_wait(file, fl);
 
-   if (fl->fl_type == F_UNLCK)
+   if (lock_is_unlock(fl))
return ocfs2_do_funlock(file, cmd, fl);
else
return ocfs2_do_flock(file, inode, cmd, fl);
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 9b76ee66aeb2..c11406cd87a8 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -744,7 +744,7 @@ static int user_plock(struct ocfs2_cluster_connection *conn,
return dlm_posix_cancel(conn->cc_lockspace, ino, file, fl);
else if (IS_GETLK(cmd))
return dlm_posix_get(conn->cc_lockspace, ino, file, fl);
-   else if (fl->fl_type == F_UNLCK)
+   else if (lock_is_unlock(fl))
return dlm_posix_unlock(conn->cc_lockspace, ino, file, fl);
else
return dlm_posix_lock(conn->cc_lockspace, ino, file, cmd, fl);

-- 
2.43.0




[PATCH v3 12/47] nfsd: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions. Also, in later
patches we're going to introduce some macros with names that clash with
the variable names in nfsd4_lock. Rename them.

Signed-off-by: Jeff Layton 
---
 fs/nfsd/nfs4state.c | 32 
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 6dc6340e2852..83d605ecdcdc 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -7493,8 +7493,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
int lkflg;
int err;
bool new = false;
-   unsigned char fl_type;
-   unsigned int fl_flags = FL_POSIX;
+   unsigned char type;
+   unsigned int flags = FL_POSIX;
struct net *net = SVC_NET(rqstp);
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 
@@ -7557,14 +7557,14 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
 
if (lock->lk_reclaim)
-   fl_flags |= FL_RECLAIM;
+   flags |= FL_RECLAIM;
 
fp = lock_stp->st_stid.sc_file;
switch (lock->lk_type) {
case NFS4_READW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags |= FL_SLEEP;
+   flags |= FL_SLEEP;
fallthrough;
case NFS4_READ_LT:
spin_lock(&fp->fi_lock);
@@ -7572,12 +7572,12 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (nf)
get_lock_access(lock_stp, NFS4_SHARE_ACCESS_READ);
spin_unlock(&fp->fi_lock);
-   fl_type = F_RDLCK;
+   type = F_RDLCK;
break;
case NFS4_WRITEW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags |= FL_SLEEP;
+   flags |= FL_SLEEP;
fallthrough;
case NFS4_WRITE_LT:
spin_lock(&fp->fi_lock);
@@ -7585,7 +7585,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
if (nf)
get_lock_access(lock_stp, NFS4_SHARE_ACCESS_WRITE);
spin_unlock(&fp->fi_lock);
-   fl_type = F_WRLCK;
+   type = F_WRLCK;
break;
default:
status = nfserr_inval;
@@ -7605,7 +7605,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 * on those filesystems:
 */
if (!exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags &= ~FL_SLEEP;
+   flags &= ~FL_SLEEP;
 
nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
if (!nbl) {
@@ -7615,11 +7615,11 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
}
 
file_lock = &nbl->nbl_lock;
-   file_lock->fl_type = fl_type;
+   file_lock->fl_type = type;
file_lock->fl_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(&lock_sop->lo_owner));
file_lock->fl_pid = current->tgid;
file_lock->fl_file = nf->nf_file;
-   file_lock->fl_flags = fl_flags;
+   file_lock->fl_flags = flags;
file_lock->fl_lmops = &nfsd_posix_mng_ops;
file_lock->fl_start = lock->lk_offset;
file_lock->fl_end = last_byte_offset(lock->lk_offset, lock->lk_length);
@@ -7632,7 +7632,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
}
 
-   if (fl_flags & FL_SLEEP) {
+   if (flags & FL_SLEEP) {
nbl->nbl_time = ktime_get_boottime_seconds();
spin_lock(&nn->blocked_locks_lock);
list_add_tail(&nbl->nbl_list, &lock_sop->lo_blocked);
@@ -7669,7 +7669,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 out:
if (nbl) {
/* dequeue it if we queued it before */
-   if (fl_flags & FL_SLEEP) {
+   if (flags & FL_SLEEP) {
spin_lock(&nn->blocked_locks_lock);
if (!list_empty(&nbl->nbl_list) &&
    !list_empty(&nbl->nbl_lru)) {
@@ -7928,7 +7928,7 @@ check_for_locks(struct nfs4_file *fp, struct nfs4_lockowner *lowner)
 
if (flctx && !list_empty_careful(&flctx->flc_posix)) {
spin_lock(&flctx->flc_lock);
-   list_for_each_entry(fl, &flctx->flc_posix, fl_list) {
+ 

[PATCH v3 10/47] lockd: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions. Also in later
patches we're going to introduce some macros with names that clash with
the variable names in nlmclnt_lock. Rename them.

Signed-off-by: Jeff Layton 
---
 fs/lockd/clntproc.c | 20 ++--
 fs/lockd/svcsubs.c  |  6 +++---
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index fba6c7fa7474..cc596748e359 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -522,8 +522,8 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
struct nlm_host *host = req->a_host;
struct nlm_res  *resp = &req->a_res;
struct nlm_wait block;
-   unsigned char fl_flags = fl->fl_flags;
-   unsigned char fl_type;
+   unsigned char flags = fl->fl_flags;
+   unsigned char type;
__be32 b_status;
int status = -ENOLCK;
 
@@ -533,7 +533,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
 
fl->fl_flags |= FL_ACCESS;
status = do_vfs_lock(fl);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
if (status < 0)
goto out;
 
@@ -595,7 +595,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
if (do_vfs_lock(fl) < 0)
printk(KERN_WARNING "%s: VFS is out of sync with lock 
manager!\n", __func__);
up_read(&host->h_rwsem);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
status = 0;
}
if (status < 0)
@@ -605,7 +605,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
 * cases NLM_LCK_DENIED is returned for a permanent error.  So
 * turn it into an ENOLCK.
 */
-   if (resp->status == nlm_lck_denied && (fl_flags & FL_SLEEP))
+   if (resp->status == nlm_lck_denied && (flags & FL_SLEEP))
status = -ENOLCK;
else
status = nlm_stat_to_errno(resp->status);
@@ -622,13 +622,13 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
   req->a_host->h_addrlen, req->a_res.status);
dprintk("lockd: lock attempt ended in fatal error.\n"
"   Attempting to unlock.\n");
-   fl_type = fl->fl_type;
+   type = fl->fl_type;
fl->fl_type = F_UNLCK;
down_read(&host->h_rwsem);
do_vfs_lock(fl);
up_read(&host->h_rwsem);
-   fl->fl_type = fl_type;
-   fl->fl_flags = fl_flags;
+   fl->fl_type = type;
+   fl->fl_flags = flags;
nlmclnt_async_call(cred, req, NLMPROC_UNLOCK, _unlock_ops);
return status;
 }
@@ -683,7 +683,7 @@ nlmclnt_unlock(struct nlm_rqst *req, struct file_lock *fl)
struct nlm_host *host = req->a_host;
struct nlm_res  *resp = &req->a_res;
int status;
-   unsigned char fl_flags = fl->fl_flags;
+   unsigned char flags = fl->fl_flags;
 
/*
 * Note: the server is supposed to either grant us the unlock
@@ -694,7 +694,7 @@ nlmclnt_unlock(struct nlm_rqst *req, struct file_lock *fl)
down_read(&host->h_rwsem);
status = do_vfs_lock(fl);
up_read(&host->h_rwsem);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
if (status == -ENOENT) {
status = 0;
goto out;
diff --git a/fs/lockd/svcsubs.c b/fs/lockd/svcsubs.c
index e3b6229e7ae5..2f33c187b876 100644
--- a/fs/lockd/svcsubs.c
+++ b/fs/lockd/svcsubs.c
@@ -73,7 +73,7 @@ static inline unsigned int file_hash(struct nfs_fh *f)
 
 int lock_to_openmode(struct file_lock *lock)
 {
-   return (lock->fl_type == F_WRLCK) ? O_WRONLY : O_RDONLY;
+   return (lock_is_write(lock)) ? O_WRONLY : O_RDONLY;
 }
 
 /*
@@ -218,7 +218,7 @@ nlm_traverse_locks(struct nlm_host *host, struct nlm_file 
*file,
 again:
file->f_locks = 0;
spin_lock(>flc_lock);
-   list_for_each_entry(fl, >flc_posix, fl_list) {
+   for_each_file_lock(fl, >flc_posix) {
if (fl->fl_lmops != _lock_operations)
continue;
 
@@ -272,7 +272,7 @@ nlm_file_inuse(struct nlm_file *file)
 
if (flctx && !list_empty_careful(>flc_posix)) {
spin_lock(>flc_lock);
-   list_for_each_entry(fl, >flc_posix, fl_list) {
+   for_each_file_lock(fl, >flc_posix) {
if (fl->fl_lmops == _lock_operations) {
spin_unlock(>flc_lock);
return 1;

-- 
2.43.0




[PATCH v3 09/47] gfs2: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions.

Signed-off-by: Jeff Layton 
---
 fs/gfs2/file.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 992ca4effb50..6c25aea30f1b 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -1443,7 +1443,7 @@ static int gfs2_lock(struct file *file, int cmd, struct 
file_lock *fl)
if (!(fl->fl_flags & FL_POSIX))
return -ENOLCK;
if (gfs2_withdrawing_or_withdrawn(sdp)) {
-   if (fl->fl_type == F_UNLCK)
+   if (lock_is_unlock(fl))
locks_lock_file_wait(file, fl);
return -EIO;
}
@@ -1451,7 +1451,7 @@ static int gfs2_lock(struct file *file, int cmd, struct 
file_lock *fl)
return dlm_posix_cancel(ls->ls_dlm, ip->i_no_addr, file, fl);
else if (IS_GETLK(cmd))
return dlm_posix_get(ls->ls_dlm, ip->i_no_addr, file, fl);
-   else if (fl->fl_type == F_UNLCK)
+   else if (lock_is_unlock(fl))
return dlm_posix_unlock(ls->ls_dlm, ip->i_no_addr, file, fl);
else
return dlm_posix_lock(ls->ls_dlm, ip->i_no_addr, file, cmd, fl);
@@ -1483,7 +1483,7 @@ static int do_flock(struct file *file, int cmd, struct 
file_lock *fl)
int error = 0;
int sleeptime;
 
-   state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
+   state = (lock_is_write(fl)) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
flags = GL_EXACT | GL_NOPID;
if (!IS_SETLKW(cmd))
flags |= LM_FLAG_TRY_1CB;
@@ -1560,7 +1560,7 @@ static int gfs2_flock(struct file *file, int cmd, struct 
file_lock *fl)
if (!(fl->fl_flags & FL_FLOCK))
return -ENOLCK;
 
-   if (fl->fl_type == F_UNLCK) {
+   if (lock_is_unlock(fl)) {
do_unflock(file, fl);
return 0;
} else {

-- 
2.43.0




[PATCH v3 08/47] dlm: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions. Also, in later
patches we're going to introduce some temporary macros with names that
clash with the variable name in dlm_posix_unlock. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/dlm/plock.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index d814c5121367..42c596b900d4 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -139,7 +139,7 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, 
struct file *file,
 
op->info.optype = DLM_PLOCK_OP_LOCK;
op->info.pid= fl->fl_pid;
-   op->info.ex = (fl->fl_type == F_WRLCK);
+   op->info.ex = (lock_is_write(fl));
op->info.wait   = !!(fl->fl_flags & FL_SLEEP);
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
@@ -291,7 +291,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 
number, struct file *file,
struct dlm_ls *ls;
struct plock_op *op;
int rv;
-   unsigned char fl_flags = fl->fl_flags;
+   unsigned char saved_flags = fl->fl_flags;
 
ls = dlm_find_lockspace_local(lockspace);
if (!ls)
@@ -345,7 +345,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 
number, struct file *file,
dlm_release_plock_op(op);
 out:
dlm_put_lockspace(ls);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = saved_flags;
return rv;
 }
 EXPORT_SYMBOL_GPL(dlm_posix_unlock);
@@ -376,7 +376,7 @@ int dlm_posix_cancel(dlm_lockspace_t *lockspace, u64 
number, struct file *file,
 
memset(&info, 0, sizeof(info));
info.pid = fl->fl_pid;
-   info.ex = (fl->fl_type == F_WRLCK);
+   info.ex = (lock_is_write(fl));
info.fsid = ls->ls_global_id;
dlm_put_lockspace(ls);
info.number = number;
@@ -438,7 +438,7 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, 
struct file *file,
 
op->info.optype = DLM_PLOCK_OP_GET;
op->info.pid= fl->fl_pid;
-   op->info.ex = (fl->fl_type == F_WRLCK);
+   op->info.ex = (lock_is_write(fl));
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;

-- 
2.43.0




[PATCH v3 06/47] afs: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions. Also, in later
patches we're going to introduce macros that conflict with the variable
name in afs_next_locker. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/afs/flock.c | 26 +-
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index 9c6dea3139f5..4eee3d1ca5ad 100644
--- a/fs/afs/flock.c
+++ b/fs/afs/flock.c
@@ -93,13 +93,13 @@ static void afs_grant_locks(struct afs_vnode *vnode)
bool exclusive = (vnode->lock_type == AFS_LOCK_WRITE);
 
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
-   if (!exclusive && p->fl_type == F_WRLCK)
+   if (!exclusive && lock_is_write(p))
continue;
 
list_move_tail(&p->fl_u.afs.link, &vnode->granted_locks);
p->fl_u.afs.state = AFS_LOCK_GRANTED;
trace_afs_flock_op(vnode, p, afs_flock_op_grant);
-   wake_up(&p->fl_wait);
+   locks_wake_up(p);
}
 }
 
@@ -112,25 +112,25 @@ static void afs_next_locker(struct afs_vnode *vnode, int 
error)
 {
struct file_lock *p, *_p, *next = NULL;
struct key *key = vnode->lock_key;
-   unsigned int fl_type = F_RDLCK;
+   unsigned int type = F_RDLCK;
 
_enter("");
 
if (vnode->lock_type == AFS_LOCK_WRITE)
-   fl_type = F_WRLCK;
+   type = F_WRLCK;
 
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
if (error &&
-   p->fl_type == fl_type &&
+   p->fl_type == type &&
afs_file_key(p->fl_file) == key) {
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = error;
-   wake_up(&p->fl_wait);
+   locks_wake_up(p);
}
 
/* Select the next locker to hand off to. */
if (next &&
-   (next->fl_type == F_WRLCK || p->fl_type == F_RDLCK))
+   (lock_is_write(next) || lock_is_read(p)))
continue;
next = p;
}
@@ -142,7 +142,7 @@ static void afs_next_locker(struct afs_vnode *vnode, int 
error)
afs_set_lock_state(vnode, AFS_VNODE_LOCK_SETTING);
next->fl_u.afs.state = AFS_LOCK_YOUR_TRY;
trace_afs_flock_op(vnode, next, afs_flock_op_wake);
-   wake_up(&next->fl_wait);
+   locks_wake_up(next);
} else {
afs_set_lock_state(vnode, AFS_VNODE_LOCK_NONE);
trace_afs_flock_ev(vnode, NULL, afs_flock_no_lockers, 0);
@@ -166,7 +166,7 @@ static void afs_kill_lockers_enoent(struct afs_vnode *vnode)
   struct file_lock, fl_u.afs.link);
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = -ENOENT;
-   wake_up(&p->fl_wait);
+   locks_wake_up(p);
}
 
key_put(vnode->lock_key);
@@ -471,7 +471,7 @@ static int afs_do_setlk(struct file *file, struct file_lock 
*fl)
fl->fl_u.afs.state = AFS_LOCK_PENDING;
 
partial = (fl->fl_start != 0 || fl->fl_end != OFFSET_MAX);
-   type = (fl->fl_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
+   type = lock_is_read(fl) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
if (mode == afs_flock_mode_write && partial)
type = AFS_LOCK_WRITE;
 
@@ -734,7 +734,7 @@ static int afs_do_getlk(struct file *file, struct file_lock 
*fl)
 
/* check local lock records first */
posix_test_lock(file, fl);
-   if (fl->fl_type == F_UNLCK) {
+   if (lock_is_unlock(fl)) {
/* no local locks; consult the server */
ret = afs_fetch_status(vnode, key, false, NULL);
if (ret < 0)
@@ -778,7 +778,7 @@ int afs_lock(struct file *file, int cmd, struct file_lock 
*fl)
fl->fl_u.afs.debug_id = atomic_inc_return(_file_lock_debug_id);
trace_afs_flock_op(vnode, fl, afs_flock_op_lock);
 
-   if (fl->fl_type == F_UNLCK)
+   if (lock_is_unlock(fl))
ret = afs_do_unlk(file, fl);
else
ret = afs_do_setlk(file, fl);
@@ -820,7 +820,7 @@ int afs_flock(struct file *file, int cmd, struct file_lock 
*fl)
trace_afs_flock_op(vnode, fl, afs_flock_op_flock);
 
/* we're simulating flock() locks using posix locks on the server */
-   if (fl->fl_type == F_UNLCK)
+   if (lock_is_unlock(fl))
ret = afs_do_unlk(file, fl);
else
ret = afs_do_setlk(file, fl);

-- 
2.43.0




[PATCH v3 07/47] ceph: convert to using new filelock helpers

2024-01-31 Thread Jeff Layton
Convert to using the new file locking helper functions.

Signed-off-by: Jeff Layton 
---
 fs/ceph/locks.c | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index e07ad29ff8b9..80ebe1d6c67d 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -273,19 +273,19 @@ int ceph_lock(struct file *file, int cmd, struct 
file_lock *fl)
}
spin_unlock(>i_ceph_lock);
if (err < 0) {
-   if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_type)
+   if (op == CEPH_MDS_OP_SETFILELOCK && lock_is_unlock(fl))
posix_lock_file(file, fl, NULL);
return err;
}
 
-   if (F_RDLCK == fl->fl_type)
+   if (lock_is_read(fl))
lock_cmd = CEPH_LOCK_SHARED;
-   else if (F_WRLCK == fl->fl_type)
+   else if (lock_is_write(fl))
lock_cmd = CEPH_LOCK_EXCL;
else
lock_cmd = CEPH_LOCK_UNLOCK;
 
-   if (op == CEPH_MDS_OP_SETFILELOCK && F_UNLCK == fl->fl_type) {
+   if (op == CEPH_MDS_OP_SETFILELOCK && lock_is_unlock(fl)) {
err = try_unlock_file(file, fl);
if (err <= 0)
return err;
@@ -333,7 +333,7 @@ int ceph_flock(struct file *file, int cmd, struct file_lock 
*fl)
}
spin_unlock(>i_ceph_lock);
if (err < 0) {
-   if (F_UNLCK == fl->fl_type)
+   if (lock_is_unlock(fl))
locks_lock_file_wait(file, fl);
return err;
}
@@ -341,14 +341,14 @@ int ceph_flock(struct file *file, int cmd, struct 
file_lock *fl)
if (IS_SETLKW(cmd))
wait = 1;
 
-   if (F_RDLCK == fl->fl_type)
+   if (lock_is_read(fl))
lock_cmd = CEPH_LOCK_SHARED;
-   else if (F_WRLCK == fl->fl_type)
+   else if (lock_is_write(fl))
lock_cmd = CEPH_LOCK_EXCL;
else
lock_cmd = CEPH_LOCK_UNLOCK;
 
-   if (F_UNLCK == fl->fl_type) {
+   if (lock_is_unlock(fl)) {
err = try_unlock_file(file, fl);
if (err <= 0)
return err;
@@ -385,9 +385,9 @@ void ceph_count_locks(struct inode *inode, int 
*fcntl_count, int *flock_count)
ctx = locks_inode_context(inode);
if (ctx) {
spin_lock(&ctx->flc_lock);
-   list_for_each_entry(lock, >flc_posix, fl_list)
+   for_each_file_lock(lock, >flc_posix)
++(*fcntl_count);
-   list_for_each_entry(lock, >flc_flock, fl_list)
+   for_each_file_lock(lock, >flc_flock)
++(*flock_count);
spin_unlock(&ctx->flc_lock);
}
@@ -453,7 +453,7 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
return 0;
 
spin_lock(>flc_lock);
-   list_for_each_entry(lock, >flc_posix, fl_list) {
+   for_each_file_lock(lock, >flc_posix) {
++seen_fcntl;
if (seen_fcntl > num_fcntl_locks) {
err = -ENOSPC;
@@ -464,7 +464,7 @@ int ceph_encode_locks_to_buffer(struct inode *inode,
goto fail;
++l;
}
-   list_for_each_entry(lock, >flc_flock, fl_list) {
+   for_each_file_lock(lock, >flc_flock) {
++seen_flock;
if (seen_flock > num_flock_locks) {
err = -ENOSPC;

-- 
2.43.0




[PATCH v3 05/47] 9p: rename fl_type variable in v9fs_file_do_lock

2024-01-31 Thread Jeff Layton
In later patches, we're going to introduce some macros that conflict
with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index bae330c2f0cf..3df8aa1b5996 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -121,7 +121,6 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
struct p9_fid *fid;
uint8_t status = P9_LOCK_ERROR;
int res = 0;
-   unsigned char fl_type;
struct v9fs_session_info *v9ses;
 
fid = filp->private_data;
@@ -208,11 +207,12 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
 * it locally
 */
if (res < 0 && fl->fl_type != F_UNLCK) {
-   fl_type = fl->fl_type;
+   unsigned char type = fl->fl_type;
+
fl->fl_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
-   fl->fl_type = fl_type;
+   fl->fl_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);

-- 
2.43.0




[PATCH v3 04/47] filelock: add some new helper functions

2024-01-31 Thread Jeff Layton
In later patches we're going to embed some common fields into a new
structure inside struct file_lock. Smooth the transition by adding some
new helper functions, and converting the core file locking code to use
them.

Signed-off-by: Jeff Layton 
---
 fs/locks.c   | 18 +-
 include/linux/filelock.h | 23 +++
 2 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 1eceaa56e47f..149070fd3b66 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -674,7 +674,7 @@ static void __locks_wake_up_blocks(struct file_lock 
*blocker)
if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
waiter->fl_lmops->lm_notify(waiter);
else
-   wake_up(&waiter->fl_wait);
+   locks_wake_up(waiter);
 
/*
 * The setting of fl_blocker to NULL marks the "done"
@@ -841,9 +841,9 @@ locks_delete_lock_ctx(struct file_lock *fl, struct 
list_head *dispose)
 static bool locks_conflict(struct file_lock *caller_fl,
   struct file_lock *sys_fl)
 {
-   if (sys_fl->fl_type == F_WRLCK)
+   if (lock_is_write(sys_fl))
return true;
-   if (caller_fl->fl_type == F_WRLCK)
+   if (lock_is_write(caller_fl))
return true;
return false;
 }
@@ -874,7 +874,7 @@ static bool posix_test_locks_conflict(struct file_lock 
*caller_fl,
  struct file_lock *sys_fl)
 {
/* F_UNLCK checks any locks on the same fd. */
-   if (caller_fl->fl_type == F_UNLCK) {
+   if (lock_is_unlock(caller_fl)) {
if (!posix_same_owner(caller_fl, sys_fl))
return false;
return locks_overlap(caller_fl, sys_fl);
@@ -1055,7 +1055,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
break;
}
 
-   if (request->fl_type == F_UNLCK) {
+   if (lock_is_unlock(request)) {
if ((request->fl_flags & FL_EXISTS) && !found)
error = -ENOENT;
goto out;
@@ -1107,7 +1107,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
 
ctx = locks_get_lock_context(inode, request->fl_type);
if (!ctx)
-   return (request->fl_type == F_UNLCK) ? 0 : -ENOMEM;
+   return lock_is_unlock(request) ? 0 : -ENOMEM;
 
/*
 * We may need two file_lock structures for this operation,
@@ -1228,7 +1228,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
continue;
if (fl->fl_start > request->fl_end)
break;
-   if (request->fl_type == F_UNLCK)
+   if (lock_is_unlock(request))
added = true;
if (fl->fl_start < request->fl_start)
left = fl;
@@ -1279,7 +1279,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
 
error = 0;
if (!added) {
-   if (request->fl_type == F_UNLCK) {
+   if (lock_is_unlock(request)) {
if (request->fl_flags & FL_EXISTS)
error = -ENOENT;
goto out;
@@ -1608,7 +1608,7 @@ void lease_get_mtime(struct inode *inode, struct 
timespec64 *time)
spin_lock(>flc_lock);
fl = list_first_entry_or_null(>flc_lease,
  struct file_lock, fl_list);
-   if (fl && (fl->fl_type == F_WRLCK))
+   if (fl && lock_is_write(fl))
has_lease = true;
spin_unlock(>flc_lock);
}
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 085ff6ba0653..a814664b1053 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -147,6 +147,29 @@ int fcntl_setlk64(unsigned int, struct file *, unsigned 
int,
 int fcntl_setlease(unsigned int fd, struct file *filp, int arg);
 int fcntl_getlease(struct file *filp);
 
+static inline bool lock_is_unlock(struct file_lock *fl)
+{
+   return fl->fl_type == F_UNLCK;
+}
+
+static inline bool lock_is_read(struct file_lock *fl)
+{
+   return fl->fl_type == F_RDLCK;
+}
+
+static inline bool lock_is_write(struct file_lock *fl)
+{
+   return fl->fl_type == F_WRLCK;
+}
+
+static inline void locks_wake_up(struct file_lock *fl)
+{
+   wake_up(&fl->fl_wait);
+}
+
+/* for walking lists of file_locks linked by fl_list */
+#define for_each_file_lock(_fl, _head) list_for_each_entry(_fl, _head, fl_list)
+
 /* fs/locks.c */
 void locks_free_lock_context(struct inode *inode);
 void locks_free_lock(struct file_lock *fl);

-- 
2.43.0




[PATCH v3 03/47] filelock: rename fl_pid variable in lock_get_status

2024-01-31 Thread Jeff Layton
In later patches we're going to introduce some macros that will clash
with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index cc7c117ee192..1eceaa56e47f 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2695,11 +2695,11 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
loff_t id, char *pfx, int repeat)
 {
struct inode *inode = NULL;
-   unsigned int fl_pid;
+   unsigned int pid;
struct pid_namespace *proc_pidns = 
proc_pid_ns(file_inode(f->file)->i_sb);
int type;
 
-   fl_pid = locks_translate_pid(fl, proc_pidns);
+   pid = locks_translate_pid(fl, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
@@ -2747,11 +2747,11 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
 (type == F_RDLCK) ? "READ" : "UNLCK");
if (inode) {
/* userspace relies on this representation of dev_t */
-   seq_printf(f, "%d %02x:%02x:%lu ", fl_pid,
+   seq_printf(f, "%d %02x:%02x:%lu ", pid,
MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino);
} else {
-   seq_printf(f, "%d :0 ", fl_pid);
+   seq_printf(f, "%d :0 ", pid);
}
if (IS_POSIX(fl)) {
if (fl->fl_end == OFFSET_MAX)

-- 
2.43.0




[PATCH v3 02/47] filelock: rename some fields in tracepoints

2024-01-31 Thread Jeff Layton
In later patches we're going to introduce some macros with names that
clash with fields here. To prevent problems building, just rename the
fields in the trace entry structures.

Signed-off-by: Jeff Layton 
---
 include/trace/events/filelock.h | 76 -
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index 1646dadd7f37..8fb1d41b1c67 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -68,11 +68,11 @@ DECLARE_EVENT_CLASS(filelock_lock,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, fl_blocker)
-   __field(fl_owner_t, fl_owner)
-   __field(unsigned int, fl_pid)
-   __field(unsigned int, fl_flags)
-   __field(unsigned char, fl_type)
+   __field(struct file_lock *, blocker)
+   __field(fl_owner_t, owner)
+   __field(unsigned int, pid)
+   __field(unsigned int, flags)
+   __field(unsigned char, type)
__field(loff_t, fl_start)
__field(loff_t, fl_end)
__field(int, ret)
@@ -82,11 +82,11 @@ DECLARE_EVENT_CLASS(filelock_lock,
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
-   __entry->fl_blocker = fl ? fl->fl_blocker : NULL;
-   __entry->fl_owner = fl ? fl->fl_owner : NULL;
-   __entry->fl_pid = fl ? fl->fl_pid : 0;
-   __entry->fl_flags = fl ? fl->fl_flags : 0;
-   __entry->fl_type = fl ? fl->fl_type : 0;
+   __entry->blocker = fl ? fl->fl_blocker : NULL;
+   __entry->owner = fl ? fl->fl_owner : NULL;
+   __entry->pid = fl ? fl->fl_pid : 0;
+   __entry->flags = fl ? fl->fl_flags : 0;
+   __entry->type = fl ? fl->fl_type : 0;
__entry->fl_start = fl ? fl->fl_start : 0;
__entry->fl_end = fl ? fl->fl_end : 0;
__entry->ret = ret;
@@ -94,9 +94,9 @@ DECLARE_EVENT_CLASS(filelock_lock,
 
TP_printk("fl=%p dev=0x%x:0x%x ino=0x%lx fl_blocker=%p fl_owner=%p 
fl_pid=%u fl_flags=%s fl_type=%s fl_start=%lld fl_end=%lld ret=%d",
__entry->fl, MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
-   __entry->i_ino, __entry->fl_blocker, __entry->fl_owner,
-   __entry->fl_pid, show_fl_flags(__entry->fl_flags),
-   show_fl_type(__entry->fl_type),
+   __entry->i_ino, __entry->blocker, __entry->owner,
+   __entry->pid, show_fl_flags(__entry->flags),
+   show_fl_type(__entry->type),
__entry->fl_start, __entry->fl_end, __entry->ret)
 );
 
@@ -125,32 +125,32 @@ DECLARE_EVENT_CLASS(filelock_lease,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, fl_blocker)
-   __field(fl_owner_t, fl_owner)
-   __field(unsigned int, fl_flags)
-   __field(unsigned char, fl_type)
-   __field(unsigned long, fl_break_time)
-   __field(unsigned long, fl_downgrade_time)
+   __field(struct file_lock *, blocker)
+   __field(fl_owner_t, owner)
+   __field(unsigned int, flags)
+   __field(unsigned char, type)
+   __field(unsigned long, break_time)
+   __field(unsigned long, downgrade_time)
),
 
TP_fast_assign(
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
-   __entry->fl_blocker = fl ? fl->fl_blocker : NULL;
-   __entry->fl_owner = fl ? fl->fl_owner : NULL;
-   __entry->fl_flags = fl ? fl->fl_flags : 0;
-   __entry->fl_type = fl ? fl->fl_type : 0;
-   __entry->fl_break_time = fl ? fl->fl_break_time : 0;
-   __entry->fl_downgrade_time = fl ? fl->fl_downgrade_time : 0;
+   __entry->blocker = fl ? fl->fl_blocker : NULL;
+   __entry->owner = fl ? fl->fl_owner : NULL;
+   __entry->flags = fl ? fl->fl_flags : 0;
+   __entry->type = fl ? fl->fl_type : 0;
+   __entry->break_time = fl ? fl->fl_break_time : 0;
+   __entry->downgrade_time = fl ? fl->fl_downgrade_time : 0;
),
 
TP_printk("fl=%p dev=0x%x:

[PATCH v3 01/47] filelock: fl_pid field should be signed int

2024-01-31 Thread Jeff Layton
This field has been unsigned for a very long time, but most users of the
struct file_lock and the file locking internals themselves treat it as a
signed value. Change it to be pid_t (which is a signed int).

Signed-off-by: Jeff Layton 
---
 include/linux/filelock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 95e868e09e29..085ff6ba0653 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -98,7 +98,7 @@ struct file_lock {
fl_owner_t fl_owner;
unsigned int fl_flags;
unsigned char fl_type;
-   unsigned int fl_pid;
+   pid_t fl_pid;
int fl_link_cpu;/* what cpu's list is this on? */
wait_queue_head_t fl_wait;
struct file *fl_file;

-- 
2.43.0




[PATCH v3 00/47] filelock: split file leases out of struct file_lock

2024-01-31 Thread Jeff Layton
I'm not sure this is much prettier than the last, but contracting
"fl_core" to "c", as Neil suggested, is a bit easier on the eyes.

I also added a few small helpers and converted several users over to
them. That reduces the size of the per-fs conversion patches later in
the series. I played with some others too, but they were too awkward
or not frequently used enough to make it worthwhile.

Many thanks to Chuck and Neil for the earlier R-b's and comments. I've
dropped those for now since this set is a bit different from the last.

I'd like to get this into linux-next soon and we can see about merging
it for v6.9, unless anyone has major objections.

Thanks!

Signed-off-by: Jeff Layton 
---
Changes in v3:
- Rename "flc_core" fields in file_lock and file_lease to "c"
- new helpers: locks_wake_up, for_each_file_lock, and 
lock_is_{unlock,read,write}
- Link to v2: 
https://lore.kernel.org/r/20240125-flsplit-v2-0-7485322b6...@kernel.org

Changes in v2:
- renamed file_lock_core fields to have "flc_" prefix
- used macros to more easily do the change piecemeal
- broke up patches into per-subsystem ones
- Link to v1: 
https://lore.kernel.org/r/20240116-flsplit-v1-0-c9d0f4370...@kernel.org

---
Jeff Layton (47):
  filelock: fl_pid field should be signed int
  filelock: rename some fields in tracepoints
  filelock: rename fl_pid variable in lock_get_status
  filelock: add some new helper functions
  9p: rename fl_type variable in v9fs_file_do_lock
  afs: convert to using new filelock helpers
  ceph: convert to using new filelock helpers
  dlm: convert to using new filelock helpers
  gfs2: convert to using new filelock helpers
  lockd: convert to using new filelock helpers
  nfs: convert to using new filelock helpers
  nfsd: convert to using new filelock helpers
  ocfs2: convert to using new filelock helpers
  smb/client: convert to using new filelock helpers
  smb/server: convert to using new filelock helpers
  filelock: drop the IS_* macros
  filelock: split common fields into struct file_lock_core
  filelock: have fs/locks.c deal with file_lock_core directly
  filelock: convert more internal functions to use file_lock_core
  filelock: make posix_same_owner take file_lock_core pointers
  filelock: convert posix_owner_key to take file_lock_core arg
  filelock: make locks_{insert,delete}_global_locks take file_lock_core arg
  filelock: convert locks_{insert,delete}_global_blocked
  filelock: make __locks_delete_block and __locks_wake_up_blocks take 
file_lock_core
  filelock: convert __locks_insert_block, conflict and deadlock checks to 
use file_lock_core
  filelock: convert fl_blocker to file_lock_core
  filelock: clean up locks_delete_block internals
  filelock: reorganize locks_delete_block and __locks_insert_block
  filelock: make assign_type helper take a file_lock_core pointer
  filelock: convert locks_wake_up_blocks to take a file_lock_core pointer
  filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx
  filelock: convert locks_translate_pid to take file_lock_core
  filelock: convert seqfile handling to use file_lock_core
  9p: adapt to breakup of struct file_lock
  afs: adapt to breakup of struct file_lock
  ceph: adapt to breakup of struct file_lock
  dlm: adapt to breakup of struct file_lock
  gfs2: adapt to breakup of struct file_lock
  fuse: adapt to breakup of struct file_lock
  lockd: adapt to breakup of struct file_lock
  nfs: adapt to breakup of struct file_lock
  nfsd: adapt to breakup of struct file_lock
  ocfs2: adapt to breakup of struct file_lock
  smb/client: adapt to breakup of struct file_lock
  smb/server: adapt to breakup of struct file_lock
  filelock: remove temporary compatibility macros
  filelock: split leases out of struct file_lock

 fs/9p/vfs_file.c|  40 +-
 fs/afs/flock.c  |  60 +--
 fs/ceph/locks.c |  74 ++--
 fs/dlm/plock.c  |  44 +--
 fs/fuse/file.c  |  14 +-
 fs/gfs2/file.c  |  16 +-
 fs/libfs.c  |   2 +-
 fs/lockd/clnt4xdr.c |  14 +-
 fs/lockd/clntlock.c |   2 +-
 fs/lockd/clntproc.c |  65 +--
 fs/lockd/clntxdr.c  |  14 +-
 fs/lockd/svc4proc.c |  10 +-
 fs/lockd/svclock.c  |  64 +--
 fs/lockd/svcproc.c  |  10 +-
 fs/lockd/svcsubs.c  |  24 +-
 fs/lockd/xdr.c  |  14 +-
 fs/lockd/xdr4.c |  14 +-
 fs/locks.c  | 851 ++--
 fs/nfs/delegation.c |   4 +-
 fs/nfs/file.c   |  22 +-
 fs/nfs/nfs3proc.c   |   2 +-
 fs/nfs/nfs4_fs.h|   2 +-
 fs/nfs/nfs4file.c   |   2 +-
 fs/nfs/nfs

Re: [PATCH v2 00/41] filelock: split struct file_lock into file_lock and file_lease structs

2024-01-25 Thread Jeff Layton
On Fri, 2024-01-26 at 09:34 +1100, NeilBrown wrote:
> On Fri, 26 Jan 2024, Chuck Lever wrote:
> > On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> > > Long ago, file locks used to hang off of a singly-linked list in struct
> > > inode. Because of this, when leases were added, they were added to the
> > > same list and so they had to be tracked using the same sort of
> > > structure.
> > > 
> > > Several years ago, we added struct file_lock_context, which allowed us
> > > to use separate lists to track different types of file locks. Given
> > > that, leases no longer need to be tracked using struct file_lock.
> > > 
> > > That said, a lot of the underlying infrastructure _is_ the same between
> > > file leases and locks, so we can't completely separate everything.
> > > 
> > > This patchset first splits a group of fields used by both file locks and
> > > leases into a new struct file_lock_core, that is then embedded in struct
> > > file_lock. Coccinelle was then used to convert a lot of the callers to
> > > deal with the move, with the remaining 25% or so converted by hand.
> > > 
> > > It then converts several internal functions in fs/locks.c to work
> > > with struct file_lock_core. Lastly, struct file_lock is split into
> > > struct file_lock and file_lease, and the lease-related APIs converted to
> > > take struct file_lease.
> > > 
> > > After the first few patches (which I left split up for easier review),
> > > the set should be bisectable. I'll plan to squash the first few
> > > together to make sure the resulting set is bisectable before merge.
> > > 
> > > Finally, I left the coccinelle scripts I used in tree. I had heard it
> > > was preferable to merge those along with the patches that they
> > > generate, but I wasn't sure where they go. I can either move those to a
> > > more appropriate location or we can just drop that commit if it's not
> > > needed.
> > > 
> > > Signed-off-by: Jeff Layton 
> > 
> > v2 looks nicer.
> > 
> > I would add a few list handling primitives, as I see enough
> > instances of list_for_each_entry, list_for_each_entry_safe,
> > list_first_entry, and list_first_entry_or_null on fl_core.flc_list
> > to make it worth having those.
> > 
> > Also, there doesn't seem to be benefit for API consumers to have to
> > understand the internal structure of struct file_lock/lease to reach
> > into fl_core. Having accessor functions for common fields like
> > fl_type and fl_flags could be cleaner.
> 
> I'm not a big fan of accessor functions.  They don't *look* like normal
> field access, so a casual reader has to go find out what the function
does, just to find that it doesn't really do anything.

I might have been a bit too hasty with the idea. I took a look earlier
today and it gets pretty ugly trying to handle these fields with
accessors. flc_flags, for instance will need both a get and a set
method, which gets wordy after a while.

Some of the flc_list accesses don't involve list walks either so I don't
think we'll ever be able to make this "neat" without a ton of one-off
accessors.

> But neither am I a fan of requiring filesystems to use
> "fl_core.flc_foo".  As you say, reaching into fl_core isn't ideal.
> 

I too think it's ugly.

> It would be nice if we could make fl_core an anonymous structure, but
> that really requires -fplan9-extensions which Linus is on-record as not
> liking.
> Unless...
> 
> How horrible would it be to use
> 
>union {
>struct file_lock_core flc_core;
>struct file_lock_core;
>};
> 
> I think that only requires -fms-extensions, which Linus was less
> negative towards.  That would allow access to the members of
> file_lock_core without the "flc_core." prefix, but would still allow
> getting the address of 'flc_core'.
> Maybe it's too ugly.
> 

I'd rather not rely on special compiler flags.

> While fl_type and fl_flags are most common, fl_pid, fl_owner, fl_file
> and even fl_wait are also used.  Having accessor functions for all of those
> would be too much I think.
> 

Some of them need setters too, and some like fl_flags like to be able to
do this:

fl->fl_flags |= FL_SLEEP;

That's hard to deal with in an accessor unless you want to do it with
macros or something.

> Maybe higher-level functions which meet the real need of the filesystem
> might be a useful approach:
> 
>  locks_wakeup(lock)
>  locks_wait_interruptible(lock, condition)
>  locks_posix_init(lock, type, pid, ...) ??
>  locks_is_u

Re: [PATCH v2 00/41] filelock: split struct file_lock into file_lock and file_lease structs

2024-01-25 Thread Jeff Layton
On Thu, 2024-01-25 at 09:57 -0500, Chuck Lever wrote:
> On Thu, Jan 25, 2024 at 05:42:41AM -0500, Jeff Layton wrote:
> > Long ago, file locks used to hang off of a singly-linked list in struct
> > inode. Because of this, when leases were added, they were added to the
> > same list and so they had to be tracked using the same sort of
> > structure.
> > 
> > Several years ago, we added struct file_lock_context, which allowed us
> > to use separate lists to track different types of file locks. Given
> > that, leases no longer need to be tracked using struct file_lock.
> > 
> > That said, a lot of the underlying infrastructure _is_ the same between
> > file leases and locks, so we can't completely separate everything.
> > 
> > This patchset first splits a group of fields used by both file locks and
> > leases into a new struct file_lock_core, that is then embedded in struct
> > file_lock. Coccinelle was then used to convert a lot of the callers to
> > deal with the move, with the remaining 25% or so converted by hand.
> > 
> > It then converts several internal functions in fs/locks.c to work
> > with struct file_lock_core. Lastly, struct file_lock is split into
> > struct file_lock and file_lease, and the lease-related APIs converted to
> > take struct file_lease.
> > 
> > After the first few patches (which I left split up for easier review),
> > the set should be bisectable. I'll plan to squash the first few
> > together to make sure the resulting set is bisectable before merge.
> > 
> > Finally, I left the coccinelle scripts I used in tree. I had heard it
> > was preferable to merge those along with the patches that they
> > generate, but I wasn't sure where they go. I can either move those to a
> > more appropriate location or we can just drop that commit if it's not
> > needed.
> > 
> > Signed-off-by: Jeff Layton 
> 
> v2 looks nicer.
> 
> I would add a few list handling primitives, as I see enough
> instances of list_for_each_entry, list_for_each_entry_safe,
> list_first_entry, and list_first_entry_or_null on fl_core.flc_list
> to make it worth having those.
> 
> Also, there doesn't seem to be benefit for API consumers to have to
> understand the internal structure of struct file_lock/lease to reach
> into fl_core. Having accessor functions for common fields like
> fl_type and fl_flags could be cleaner.
> 

That is a good suggestion. I had considered it before and figured "why
bother", but I think that would make things simpler.

I'll plan to do a v3 that has more helpers. Possibly we can just convert
some of the subsystems ahead of time and avoid some churn. Stay tuned...

> For the series:
> 
> Reviewed-by: Chuck Lever 
> 
> For the nfsd and lockd parts:
> 
> Acked-by: Chuck Lever 
> 
> 
> > ---
> > Changes in v2:
> > - renamed file_lock_core fields to have "flc_" prefix
> > - used macros to more easily do the change piecemeal
> > - broke up patches into per-subsystem ones
> > - Link to v1: 
> > https://lore.kernel.org/r/20240116-flsplit-v1-0-c9d0f4370...@kernel.org
> > 
> > ---
> > Jeff Layton (41):
> >   filelock: rename some fields in tracepoints
> >   filelock: rename fl_pid variable in lock_get_status
> >   dlm: rename fl_flags variable in dlm_posix_unlock
> >   nfs: rename fl_flags variable in nfs4_proc_unlck
> >   nfsd: rename fl_type and fl_flags variables in nfsd4_lock
> >   lockd: rename fl_flags and fl_type variables in nlmclnt_lock
> >   9p: rename fl_type variable in v9fs_file_do_lock
> >   afs: rename fl_type variable in afs_next_locker
> >   filelock: drop the IS_* macros
> >   filelock: split common fields into struct file_lock_core
> >   filelock: add coccinelle scripts to move fields to struct 
> > file_lock_core
> >   filelock: have fs/locks.c deal with file_lock_core directly
> >   filelock: convert some internal functions to use file_lock_core 
> > instead
> >   filelock: convert more internal functions to use file_lock_core
> >   filelock: make posix_same_owner take file_lock_core pointers
> >   filelock: convert posix_owner_key to take file_lock_core arg
> >   filelock: make locks_{insert,delete}_global_locks take file_lock_core 
> > arg
> >   filelock: convert locks_{insert,delete}_global_blocked
> >   filelock: make __locks_delete_block and __locks_wake_up_blocks take 
> > file_lock_core
> >   filelock: convert __locks_insert_block, conflict and deadlock checks 
> > to use file_lock_core
> >   fil

[PATCH v2 29/41] 9p: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c | 39 +++
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index a1dabcf73380..4e4f555e0c8b 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,7 +9,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -108,7 +107,7 @@ static int v9fs_file_lock(struct file *filp, int cmd, 
struct file_lock *fl)
 
p9_debug(P9_DEBUG_VFS, "filp: %p lock: %p\n", filp, fl);
 
-   if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_type != F_UNLCK) {
+   if ((IS_SETLK(cmd) || IS_SETLKW(cmd)) && fl->fl_core.flc_type != 
F_UNLCK) {
filemap_write_and_wait(inode->i_mapping);
invalidate_mapping_pages(&inode->i_data, 0, -1);
}
@@ -127,7 +126,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
fid = filp->private_data;
BUG_ON(fid == NULL);
 
-   BUG_ON((fl->fl_flags & FL_POSIX) != FL_POSIX);
+   BUG_ON((fl->fl_core.flc_flags & FL_POSIX) != FL_POSIX);
 
res = locks_lock_file_wait(filp, fl);
if (res < 0)
@@ -136,7 +135,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
/* convert posix lock to p9 tlock args */
memset(&flock, 0, sizeof(flock));
/* map the lock type */
-   switch (fl->fl_type) {
+   switch (fl->fl_core.flc_type) {
case F_RDLCK:
flock.type = P9_LOCK_TYPE_RDLCK;
break;
@@ -152,7 +151,7 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
flock.length = 0;
else
flock.length = fl->fl_end - fl->fl_start + 1;
-   flock.proc_id = fl->fl_pid;
+   flock.proc_id = fl->fl_core.flc_pid;
flock.client_id = fid->clnt->name;
if (IS_SETLKW(cmd))
flock.flags = P9_LOCK_FLAGS_BLOCK;
@@ -207,13 +206,13 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, 
struct file_lock *fl)
 * incase server returned error for lock request, revert
 * it locally
 */
-   if (res < 0 && fl->fl_type != F_UNLCK) {
-   unsigned char type = fl->fl_type;
+   if (res < 0 && fl->fl_core.flc_type != F_UNLCK) {
+   unsigned char type = fl->fl_core.flc_type;
 
-   fl->fl_type = F_UNLCK;
+   fl->fl_core.flc_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
-   fl->fl_type = type;
+   fl->fl_core.flc_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);
@@ -235,7 +234,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
 * if we have a conflicting lock locally, no need to validate
 * with server
 */
-   if (fl->fl_type != F_UNLCK)
+   if (fl->fl_core.flc_type != F_UNLCK)
return res;
 
/* convert posix lock to p9 tgetlock args */
@@ -246,7 +245,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
glock.length = 0;
else
glock.length = fl->fl_end - fl->fl_start + 1;
-   glock.proc_id = fl->fl_pid;
+   glock.proc_id = fl->fl_core.flc_pid;
glock.client_id = fid->clnt->name;
 
res = p9_client_getlock_dotl(fid, &glock);
@@ -255,13 +254,13 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
/* map 9p lock type to os lock type */
switch (glock.type) {
case P9_LOCK_TYPE_RDLCK:
-   fl->fl_type = F_RDLCK;
+   fl->fl_core.flc_type = F_RDLCK;
break;
case P9_LOCK_TYPE_WRLCK:
-   fl->fl_type = F_WRLCK;
+   fl->fl_core.flc_type = F_WRLCK;
break;
case P9_LOCK_TYPE_UNLCK:
-   fl->fl_type = F_UNLCK;
+   fl->fl_core.flc_type = F_UNLCK;
break;
}
if (glock.type != P9_LOCK_TYPE_UNLCK) {
@@ -270,7 +269,7 @@ static int v9fs_file_getlock(struct file *filp, struct 
file_lock *fl)
fl->fl_end = OFFSET_MAX;
else
fl->fl_end = glock.start + glock.length - 1;
-   fl->fl_pid = -glock.proc_id;
+   fl->fl_core.flc_pid = -glock.proc_id;
}
 out:
if (glock.client_id != fid->clnt->name)
@@ -294,7 +293,7 @@ static int v9fs_file_lock_dotl(struct f

[PATCH v2 28/41] filelock: convert seqfile handling to use file_lock_core

2024-01-25 Thread Jeff Layton
Reduce some pointer manipulation by just using file_lock_core where we
can and only translate to a file_lock when needed.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 71 +++---
 1 file changed, 36 insertions(+), 35 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index e8afdd084245..de93d38da2f9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2718,52 +2718,54 @@ struct locks_iterator {
loff_t  li_pos;
 };
 
-static void lock_get_status(struct seq_file *f, struct file_lock *fl,
+static void lock_get_status(struct seq_file *f, struct file_lock_core *flc,
loff_t id, char *pfx, int repeat)
 {
struct inode *inode = NULL;
unsigned int pid;
struct pid_namespace *proc_pidns = 
proc_pid_ns(file_inode(f->file)->i_sb);
-   int type = fl->fl_core.flc_type;
+   int type = flc->flc_type;
+   struct file_lock *fl = file_lock(flc);
+
+   pid = locks_translate_pid(flc, proc_pidns);
 
-   pid = locks_translate_pid(&fl->fl_core, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
 * init_pid_ns to get saved lock pid value.
 */
 
-   if (fl->fl_core.flc_file != NULL)
-   inode = file_inode(fl->fl_core.flc_file);
+   if (flc->flc_file != NULL)
+   inode = file_inode(flc->flc_file);
 
seq_printf(f, "%lld: ", id);
 
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
 
-   if (fl->fl_core.flc_flags & FL_POSIX) {
-   if (fl->fl_core.flc_flags & FL_ACCESS)
+   if (flc->flc_flags & FL_POSIX) {
+   if (flc->flc_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
-   else if (fl->fl_core.flc_flags & FL_OFDLCK)
+   else if (flc->flc_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
 
seq_printf(f, " %s ",
 (inode == NULL) ? "*NOINODE*" : "ADVISORY ");
-   } else if (fl->fl_core.flc_flags & FL_FLOCK) {
+   } else if (flc->flc_flags & FL_FLOCK) {
seq_puts(f, "FLOCK  ADVISORY  ");
-   } else if (fl->fl_core.flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+   } else if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
type = target_leasetype(fl);
 
-   if (fl->fl_core.flc_flags & FL_DELEG)
+   if (flc->flc_flags & FL_DELEG)
seq_puts(f, "DELEG  ");
else
seq_puts(f, "LEASE  ");
 
if (lease_breaking(fl))
seq_puts(f, "BREAKING  ");
-   else if (fl->fl_core.flc_file)
+   else if (flc->flc_file)
seq_puts(f, "ACTIVE");
else
seq_puts(f, "BREAKER   ");
@@ -2781,7 +2783,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
} else {
seq_printf(f, "%d :0 ", pid);
}
-   if (fl->fl_core.flc_flags & FL_POSIX) {
+   if (flc->flc_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else
@@ -2791,18 +2793,18 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
}
 }
 
-static struct file_lock *get_next_blocked_member(struct file_lock *node)
+static struct file_lock_core *get_next_blocked_member(struct file_lock_core 
*node)
 {
-   struct file_lock *tmp;
+   struct file_lock_core *tmp;
 
/* NULL node or root node */
-   if (node == NULL || node->fl_core.flc_blocker == NULL)
+   if (node == NULL || node->flc_blocker == NULL)
return NULL;
 
/* Next member in the linked list could be itself */
-   tmp = list_next_entry(node, fl_core.flc_blocked_member);
-   if (list_entry_is_head(tmp, &node->fl_core.flc_blocker->flc_blocked_requests,
-  fl_core.flc_blocked_member)
+   tmp = list_next_entry(node, flc_blocked_member);
+   if (list_entry_is_head(tmp, &node->flc_blocker->flc_blocked_requests,
+  flc_blocked_member)
|| tmp == node) {
return NULL;
}
@@ -2813,18 +2815,18 @@ static struct file_lock *get_next_blocked_member(struct 
file_lock *node)
 static int locks_show(struct seq_file *f, void *v)
 {
struct locks_ite

[PATCH v2 22/41] filelock: clean up locks_delete_block internals

2024-01-25 Thread Jeff Layton
Rework the internals of locks_delete_block to use struct file_lock_core
(mostly just for clarity's sake). The prototype is not changed.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 15 ---
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 8e320c95c416..739af36d98df 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -697,9 +697,10 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
  *
  * lockd/nfsd need to disconnect the lock while working on it.
  */
-int locks_delete_block(struct file_lock *waiter)
+int locks_delete_block(struct file_lock *waiter_fl)
 {
int status = -ENOENT;
   struct file_lock_core *waiter = &waiter_fl->fl_core;
 
/*
 * If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -722,21 +723,21 @@ int locks_delete_block(struct file_lock *waiter)
 * no new locks can be inserted into its fl_blocked_requests list, and
 * can avoid doing anything further if the list is empty.
 */
-   if (!smp_load_acquire(&waiter->fl_core.flc_blocker) &&
-   list_empty(&waiter->fl_core.flc_blocked_requests))
+   if (!smp_load_acquire(&waiter->flc_blocker) &&
+   list_empty(&waiter->flc_blocked_requests))
return status;
 
spin_lock(&blocked_lock_lock);
-   if (waiter->fl_core.flc_blocker)
+   if (waiter->flc_blocker)
status = 0;
-   __locks_wake_up_blocks(&waiter->fl_core);
-   __locks_delete_block(&waiter->fl_core);
+   __locks_wake_up_blocks(waiter);
+   __locks_delete_block(waiter);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
 * a block. Paired with acquire at the top of this function.
 */
-   smp_store_release(&waiter->fl_core.flc_blocker, NULL);
+   smp_store_release(&waiter->flc_blocker, NULL);
spin_unlock(&blocked_lock_lock);
return status;
 }

-- 
2.43.0




[PATCH v2 17/41] filelock: make locks_{insert,delete}_global_locks take file_lock_core arg

2024-01-25 Thread Jeff Layton
Convert these functions to take a file_lock_core instead of a file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index effe84f954f9..ad4bb9cd4c9d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -596,20 +596,20 @@ static int posix_same_owner(struct file_lock_core *fl1, 
struct file_lock_core *f
 }
 
 /* Must be called with the flc_lock held! */
-static void locks_insert_global_locks(struct file_lock *fl)
+static void locks_insert_global_locks(struct file_lock_core *flc)
 {
struct file_lock_list_struct *fll = this_cpu_ptr(&file_lock_list);
 
percpu_rwsem_assert_held(&file_rwsem);
 
spin_lock(&fll->lock);
-   fl->fl_core.flc_link_cpu = smp_processor_id();
-   hlist_add_head(&fl->fl_core.flc_link, &fll->hlist);
+   flc->flc_link_cpu = smp_processor_id();
+   hlist_add_head(&flc->flc_link, &fll->hlist);
spin_unlock(&fll->lock);
 }
 
 /* Must be called with the flc_lock held! */
-static void locks_delete_global_locks(struct file_lock *fl)
+static void locks_delete_global_locks(struct file_lock_core *flc)
 {
struct file_lock_list_struct *fll;
 
@@ -620,12 +620,12 @@ static void locks_delete_global_locks(struct file_lock 
*fl)
 * is done while holding the flc_lock, and new insertions into the list
 * also require that it be held.
 */
-   if (hlist_unhashed(&fl->fl_core.flc_link))
+   if (hlist_unhashed(&flc->flc_link))
return;
 
-   fll = per_cpu_ptr(&file_lock_list, fl->fl_core.flc_link_cpu);
+   fll = per_cpu_ptr(&file_lock_list, flc->flc_link_cpu);
spin_lock(&fll->lock);
-   hlist_del_init(&fl->fl_core.flc_link);
+   hlist_del_init(&flc->flc_link);
spin_unlock(&fll->lock);
 }
 
@@ -814,13 +814,13 @@ static void
 locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
 {
list_add_tail(&fl->fl_core.flc_list, before);
-   locks_insert_global_locks(fl);
+   locks_insert_global_locks(&fl->fl_core);
 }
 
 static void
 locks_unlink_lock_ctx(struct file_lock *fl)
 {
-   locks_delete_global_locks(fl);
+   locks_delete_global_locks(&fl->fl_core);
list_del_init(&fl->fl_core.flc_list);
locks_wake_up_blocks(fl);
 }

-- 
2.43.0




[PATCH v2 21/41] filelock: convert fl_blocker to file_lock_core

2024-01-25 Thread Jeff Layton
Both locks and leases deal with fl_blocker. Switch the fl_blocker
pointer in struct file_lock_core to point to the file_lock_core of the
blocker instead of a file_lock structure.

Signed-off-by: Jeff Layton 
---
 fs/locks.c  | 16 
 include/linux/filelock.h|  2 +-
 include/trace/events/filelock.h |  4 ++--
 3 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index a86841fc8220..8e320c95c416 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -400,7 +400,7 @@ static void locks_move_blocks(struct file_lock *new, struct 
file_lock *fl)
 
/*
 * As ctx->flc_lock is held, new requests cannot be added to
-* ->fl_blocked_requests, so we don't need a lock to check if it
+* ->flc_blocked_requests, so we don't need a lock to check if it
 * is empty.
 */
if (list_empty(&fl->fl_core.flc_blocked_requests))
@@ -410,7 +410,7 @@ static void locks_move_blocks(struct file_lock *new, struct 
file_lock *fl)
 &new->fl_core.flc_blocked_requests);
list_for_each_entry(f, &new->fl_core.flc_blocked_requests,
fl_core.flc_blocked_member)
-   f->fl_core.flc_blocker = new;
+   f->fl_core.flc_blocker = &new->fl_core;
spin_unlock(&blocked_lock_lock);
 }
 
@@ -773,7 +773,7 @@ static void __locks_insert_block(struct file_lock 
*blocker_fl,
blocker =  flc;
goto new_blocker;
}
-   waiter->flc_blocker = file_lock(blocker);
+   waiter->flc_blocker = blocker;
list_add_tail(&waiter->flc_blocked_member,
  &blocker->flc_blocked_requests);
 
@@ -996,7 +996,7 @@ static struct file_lock_core 
*what_owner_is_waiting_for(struct file_lock_core *b
hash_for_each_possible(blocked_hash, flc, flc_link, 
posix_owner_key(blocker)) {
if (posix_same_owner(flc, blocker)) {
while (flc->flc_blocker)
-   flc = &flc->flc_blocker->fl_core;
+   flc = flc->flc_blocker;
return flc;
}
}
@@ -2798,9 +2798,9 @@ static struct file_lock *get_next_blocked_member(struct 
file_lock *node)
 
/* Next member in the linked list could be itself */
tmp = list_next_entry(node, fl_core.flc_blocked_member);
-   if (list_entry_is_head(tmp, &node->fl_core.flc_blocker->fl_core.flc_blocked_requests,
-   fl_core.flc_blocked_member)
-   || tmp == node) {
+   if (list_entry_is_head(tmp, &node->fl_core.flc_blocker->flc_blocked_requests,
+  fl_core.flc_blocked_member)
+   || tmp == node) {
return NULL;
}
 
@@ -2841,7 +2841,7 @@ static int locks_show(struct seq_file *f, void *v)
tmp = get_next_blocked_member(cur);
/* Fall back to parent node */
while (tmp == NULL && cur->fl_core.flc_blocker != NULL) 
{
-   cur = cur->fl_core.flc_blocker;
+   cur = file_lock(cur->fl_core.flc_blocker);
level--;
tmp = get_next_blocked_member(cur);
}
diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 0c0db7f20ff6..9ddf27faba94 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -87,7 +87,7 @@ bool opens_in_grace(struct net *);
  */
 
 struct file_lock_core {
-   struct file_lock *flc_blocker;  /* The lock that is blocking us */
+   struct file_lock_core *flc_blocker; /* The lock that is blocking us 
*/
struct list_head flc_list;  /* link into file_lock_context */
struct hlist_node flc_link; /* node in global lists */
struct list_head flc_blocked_requests;  /* list of requests with
diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index 9efd7205460c..c0b92e888d16 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -68,7 +68,7 @@ DECLARE_EVENT_CLASS(filelock_lock,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, blocker)
+   __field(struct file_lock_core *, blocker)
__field(fl_owner_t, owner)
__field(unsigned int, pid)
__field(unsigned int, flags)
@@ -125,7 +125,7 @@ DECLARE_EVENT_CLASS(filelock_lease,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, blocker)
+   __field(struct file_lock_core *, blocker)
__field(fl_owne

[PATCH v2 10/41] filelock: split common fields into struct file_lock_core

2024-01-25 Thread Jeff Layton
In a future patch, we're going to split file leases into their own
structure. Since a lot of the underlying machinery uses the same fields
move those into a new file_lock_core, and embed that inside struct
file_lock.

For now, add some macros to ensure that we can continue to build while
the conversion is in progress.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c  |  1 +
 fs/afs/internal.h |  1 +
 fs/ceph/locks.c   |  1 +
 fs/dlm/plock.c|  1 +
 fs/gfs2/file.c|  1 +
 fs/lockd/clntproc.c   |  1 +
 fs/locks.c|  1 +
 fs/nfs/file.c |  1 +
 fs/nfs/nfs4_fs.h  |  1 +
 fs/nfs/write.c|  1 +
 fs/nfsd/netns.h   |  1 +
 fs/ocfs2/locks.c  |  1 +
 fs/ocfs2/stack_user.c |  1 +
 fs/open.c |  2 +-
 fs/posix_acl.c|  4 ++--
 fs/smb/client/cifsglob.h  |  1 +
 fs/smb/client/cifssmb.c   |  1 +
 fs/smb/client/file.c  |  3 ++-
 fs/smb/client/smb2file.c  |  1 +
 fs/smb/server/smb2pdu.c   |  1 +
 fs/smb/server/vfs.c   |  1 +
 include/linux/filelock.h  | 47 ++-
 include/linux/lockd/xdr.h |  3 ++-
 23 files changed, 59 insertions(+), 18 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index 3df8aa1b5996..a1dabcf73380 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/afs/internal.h b/fs/afs/internal.h
index 9c03fcf7ffaa..f5dd428e40f4 100644
--- a/fs/afs/internal.h
+++ b/fs/afs/internal.h
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index e07ad29ff8b9..ccb358c398ca 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,6 +7,7 @@
 
 #include "super.h"
 #include "mds_client.h"
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index 1b66b2d2b801..b89dca1d51b0 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -4,6 +4,7 @@
  */
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 992ca4effb50..9e7cd054e924 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,6 +15,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index cc596748e359..1f71260603b7 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/locks.c b/fs/locks.c
index 87212f86eca9..cee3f183a872 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,6 +48,7 @@
  * children.
  *
  */
+#define _NEED_FILE_LOCK_FIELD_MACROS
 
 #include 
 #include 
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 8577ccf621f5..3c9a8ad91540 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,6 +31,7 @@
 #include 
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "delegation.h"
diff --git a/fs/nfs/nfs4_fs.h b/fs/nfs/nfs4_fs.h
index 581698f1b7b2..752224a48f1c 100644
--- a/fs/nfs/nfs4_fs.h
+++ b/fs/nfs/nfs4_fs.h
@@ -23,6 +23,7 @@
 #define NFS4_MAX_LOOP_ON_RECOVER (10)
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 struct idmap;
diff --git a/fs/nfs/write.c b/fs/nfs/write.c
index bb79d3a886ae..ed837a3675cf 100644
--- a/fs/nfs/write.c
+++ b/fs/nfs/write.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include 
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index 74b4360779a1..fd91125208be 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,6 +10,7 @@
 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index f37174e79fad..8a9970dc852e 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -8,6 +8,7 @@
  */
 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 9b76ee66aeb2..460c882c5384 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -9,6 +9,7 @@
 
 #include 
 #include 
+#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/open.c b/fs/open.c
index a84d21e55c39..0a73afe04d34 100644
--- a/fs/open.c
+++ b/fs/open.c
@@ -1364,7 +1364,7 @@ struct file *filp_open(const char *filename, int flags, 
umode_t mode)
 {
struct filename *name = getname_kernel(filename);
struct file *file = ERR_CAST(name);
-   
+
if (!IS_ERR(name)) {
file = file_open_name(name, flags, mode);
putname(name);
diff --git a/fs/posix_acl.c b/fs/posix_acl.c
index e1af20893ebe

[PATCH v2 27/41] filelock: convert locks_translate_pid to take file_lock_core

2024-01-25 Thread Jeff Layton
Signed-off-by: Jeff Layton 
---
 fs/locks.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 0491d621417d..e8afdd084245 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2169,17 +2169,17 @@ EXPORT_SYMBOL_GPL(vfs_test_lock);
  *
  * Used to translate a fl_pid into a namespace virtual pid number
  */
-static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace 
*ns)
+static pid_t locks_translate_pid(struct file_lock_core *fl, struct 
pid_namespace *ns)
 {
pid_t vnr;
struct pid *pid;
 
-   if (fl->fl_core.flc_flags & FL_OFDLCK)
+   if (fl->flc_flags & FL_OFDLCK)
return -1;
 
/* Remote locks report a negative pid value */
-   if (fl->fl_core.flc_pid <= 0)
-   return fl->fl_core.flc_pid;
+   if (fl->flc_pid <= 0)
+   return fl->flc_pid;
 
/*
 * If the flock owner process is dead and its pid has been already
@@ -2187,10 +2187,10 @@ static pid_t locks_translate_pid(struct file_lock *fl, 
struct pid_namespace *ns)
 * flock owner pid number in init pidns.
 */
if (ns == &init_pid_ns)
-   return (pid_t) fl->fl_core.flc_pid;
+   return (pid_t) fl->flc_pid;
 
rcu_read_lock();
-   pid = find_pid_ns(fl->fl_core.flc_pid, &init_pid_ns);
+   pid = find_pid_ns(fl->flc_pid, &init_pid_ns);
vnr = pid_nr_ns(pid, ns);
rcu_read_unlock();
return vnr;
@@ -2198,7 +2198,7 @@ static pid_t locks_translate_pid(struct file_lock *fl, 
struct pid_namespace *ns)
 
 static int posix_lock_to_flock(struct flock *flock, struct file_lock *fl)
 {
-   flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
-   flock->l_pid = locks_translate_pid(&fl->fl_core, task_active_pid_ns(current));
 #if BITS_PER_LONG == 32
/*
 * Make sure we can represent the posix lock via
@@ -2220,7 +2220,7 @@ static int posix_lock_to_flock(struct flock *flock, 
struct file_lock *fl)
 #if BITS_PER_LONG == 32
 static void posix_lock_to_flock64(struct flock64 *flock, struct file_lock *fl)
 {
-   flock->l_pid = locks_translate_pid(fl, task_active_pid_ns(current));
+   flock->l_pid = locks_translate_pid(&fl->fl_core, task_active_pid_ns(current));
flock->l_start = fl->fl_start;
flock->l_len = fl->fl_end == OFFSET_MAX ? 0 :
fl->fl_end - fl->fl_start + 1;
@@ -2726,7 +2726,7 @@ static void lock_get_status(struct seq_file *f, struct 
file_lock *fl,
struct pid_namespace *proc_pidns = 
proc_pid_ns(file_inode(f->file)->i_sb);
int type = fl->fl_core.flc_type;
 
-   pid = locks_translate_pid(fl, proc_pidns);
+   pid = locks_translate_pid(&fl->fl_core, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
@@ -2819,7 +2819,7 @@ static int locks_show(struct seq_file *f, void *v)
 
cur = hlist_entry(v, struct file_lock, fl_core.flc_link);
 
-   if (locks_translate_pid(cur, proc_pidns) == 0)
+   if (locks_translate_pid(&cur->fl_core, proc_pidns) == 0)
return 0;
 
/* View this crossed linked list as a binary tree, the first member of 
fl_blocked_requests

-- 
2.43.0




[PATCH v2 41/41] filelock: split leases out of struct file_lock

2024-01-25 Thread Jeff Layton
Add a new struct file_lease and move the lease-specific fields from
struct file_lock to it. Convert the appropriate API calls to take
struct file_lease instead, and convert the callers to use them.

There is zero overlap between the lock manager operations for file
locks and the ones for file leases, so split the lease-related
operations off into a new lease_manager_operations struct.

Signed-off-by: Jeff Layton 
---
 fs/libfs.c  |   2 +-
 fs/locks.c  | 119 ++--
 fs/nfs/nfs4_fs.h|   2 +-
 fs/nfs/nfs4file.c   |   2 +-
 fs/nfs/nfs4proc.c   |   4 +-
 fs/nfsd/nfs4layouts.c   |  17 +++---
 fs/nfsd/nfs4state.c |  21 ---
 fs/smb/client/cifsfs.c  |   2 +-
 include/linux/filelock.h|  49 +++--
 include/linux/fs.h  |   5 +-
 include/trace/events/filelock.h |  18 +++---
 11 files changed, 147 insertions(+), 94 deletions(-)

diff --git a/fs/libfs.c b/fs/libfs.c
index eec6031b0155..8b67cb4655d5 100644
--- a/fs/libfs.c
+++ b/fs/libfs.c
@@ -1580,7 +1580,7 @@ EXPORT_SYMBOL(alloc_anon_inode);
  * All arguments are ignored and it just returns -EINVAL.
  */
 int
-simple_nosetlease(struct file *filp, int arg, struct file_lock **flp,
+simple_nosetlease(struct file *filp, int arg, struct file_lease **flp,
  void **priv)
 {
return -EINVAL;
diff --git a/fs/locks.c b/fs/locks.c
index de93d38da2f9..c6c2b2e173fb 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -74,12 +74,17 @@ static struct file_lock *file_lock(struct file_lock_core 
*flc)
return container_of(flc, struct file_lock, fl_core);
 }
 
-static bool lease_breaking(struct file_lock *fl)
+static struct file_lease *file_lease(struct file_lock_core *flc)
+{
+   return container_of(flc, struct file_lease, fl_core);
+}
+
+static bool lease_breaking(struct file_lease *fl)
 {
	return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
 }
 
-static int target_leasetype(struct file_lock *fl)
+static int target_leasetype(struct file_lease *fl)
 {
if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
@@ -166,6 +171,7 @@ static DEFINE_SPINLOCK(blocked_lock_lock);
 
 static struct kmem_cache *flctx_cache __ro_after_init;
 static struct kmem_cache *filelock_cache __ro_after_init;
+static struct kmem_cache *filelease_cache __ro_after_init;
 
 static struct file_lock_context *
 locks_get_lock_context(struct inode *inode, int type)
@@ -275,6 +281,18 @@ struct file_lock *locks_alloc_lock(void)
 }
 EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
+/* Allocate an empty lock structure. */
+struct file_lease *locks_alloc_lease(void)
+{
+   struct file_lease *fl = kmem_cache_zalloc(filelease_cache, GFP_KERNEL);
+
+   if (fl)
+   locks_init_lock_heads(&fl->fl_core);
+
+   return fl;
+}
+EXPORT_SYMBOL_GPL(locks_alloc_lease);
+
 void locks_release_private(struct file_lock *fl)
 {
	struct file_lock_core *flc = &fl->fl_core;
@@ -336,15 +354,25 @@ void locks_free_lock(struct file_lock *fl)
 }
 EXPORT_SYMBOL(locks_free_lock);
 
+/* Free a lease which is not in use. */
+void locks_free_lease(struct file_lease *fl)
+{
+   kmem_cache_free(filelease_cache, fl);
+}
+EXPORT_SYMBOL(locks_free_lease);
+
 static void
 locks_dispose_list(struct list_head *dispose)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
	while (!list_empty(dispose)) {
-   fl = list_first_entry(dispose, struct file_lock, fl_core.flc_list);
-   list_del_init(&fl->fl_core.flc_list);
-   locks_free_lock(fl);
+   flc = list_first_entry(dispose, struct file_lock_core, flc_list);
+   list_del_init(&flc->flc_list);
+   if (flc->flc_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
+   locks_free_lease(file_lease(flc));
+   else
+   locks_free_lock(file_lock(flc));
	}
 }
 
@@ -355,6 +383,13 @@ void locks_init_lock(struct file_lock *fl)
 }
 EXPORT_SYMBOL(locks_init_lock);
 
+void locks_init_lease(struct file_lease *fl)
+{
+   memset(fl, 0, sizeof(*fl));
	locks_init_lock_heads(&fl->fl_core);
+}
+EXPORT_SYMBOL(locks_init_lease);
+
 /*
  * Initialize a new lock from an existing file_lock structure.
  */
@@ -518,14 +553,14 @@ static int flock_to_posix_lock(struct file *filp, struct file_lock *fl,
 
 /* default lease lock manager operations */
 static bool
-lease_break_callback(struct file_lock *fl)
+lease_break_callback(struct file_lease *fl)
 {
	kill_fasync(&fl->fl_fasync, SIGIO, POLL_MSG);
return false;
 }
 
 static void
-lease_setup(struct file_lock *fl, void **priv)
+lease_setup(struct file_lease *fl, void **priv)
 {
struct file *filp = fl->fl_core.flc_file;
struct fasync_struct *fa = *priv;
@@ -541,7 +576,7 @@ lease_setup(struct file_lock *fl, void

[PATCH v2 40/41] filelock: remove temporary compatability macros

2024-01-25 Thread Jeff Layton
Everything has been converted to access fl_core fields directly, so we
can now drop these.

Signed-off-by: Jeff Layton 
---
 include/linux/filelock.h | 16 
 1 file changed, 16 deletions(-)

diff --git a/include/linux/filelock.h b/include/linux/filelock.h
index 9ddf27faba94..c887fce6dbf9 100644
--- a/include/linux/filelock.h
+++ b/include/linux/filelock.h
@@ -105,22 +105,6 @@ struct file_lock_core {
struct file *flc_file;
 };
 
-/* Temporary macros to allow building during coccinelle conversion */
-#ifdef _NEED_FILE_LOCK_FIELD_MACROS
-#define fl_list fl_core.flc_list
-#define fl_blocker fl_core.flc_blocker
-#define fl_link fl_core.flc_link
-#define fl_blocked_requests fl_core.flc_blocked_requests
-#define fl_blocked_member fl_core.flc_blocked_member
-#define fl_owner fl_core.flc_owner
-#define fl_flags fl_core.flc_flags
-#define fl_type fl_core.flc_type
-#define fl_pid fl_core.flc_pid
-#define fl_link_cpu fl_core.flc_link_cpu
-#define fl_wait fl_core.flc_wait
-#define fl_file fl_core.flc_file
-#endif
-
 struct file_lock {
struct file_lock_core fl_core;
loff_t fl_start;

-- 
2.43.0




[PATCH v2 39/41] smb/server: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/smb/server/smb2pdu.c | 45 ++---
 fs/smb/server/vfs.c | 15 +++
 2 files changed, 29 insertions(+), 31 deletions(-)

diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
index d12d11cdea29..1a1ce70c7b2d 100644
--- a/fs/smb/server/smb2pdu.c
+++ b/fs/smb/server/smb2pdu.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "glob.h"
@@ -6761,10 +6760,10 @@ struct file_lock *smb_flock_init(struct file *f)
 
locks_init_lock(fl);
 
-   fl->fl_owner = f;
-   fl->fl_pid = current->tgid;
-   fl->fl_file = f;
-   fl->fl_flags = FL_POSIX;
+   fl->fl_core.flc_owner = f;
+   fl->fl_core.flc_pid = current->tgid;
+   fl->fl_core.flc_file = f;
+   fl->fl_core.flc_flags = FL_POSIX;
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
 
@@ -6781,30 +6780,30 @@ static int smb2_set_flock_flags(struct file_lock *flock, int flags)
case SMB2_LOCKFLAG_SHARED:
ksmbd_debug(SMB, "received shared request\n");
cmd = F_SETLKW;
-   flock->fl_type = F_RDLCK;
-   flock->fl_flags |= FL_SLEEP;
+   flock->fl_core.flc_type = F_RDLCK;
+   flock->fl_core.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_EXCLUSIVE:
ksmbd_debug(SMB, "received exclusive request\n");
cmd = F_SETLKW;
-   flock->fl_type = F_WRLCK;
-   flock->fl_flags |= FL_SLEEP;
+   flock->fl_core.flc_type = F_WRLCK;
+   flock->fl_core.flc_flags |= FL_SLEEP;
break;
case SMB2_LOCKFLAG_SHARED | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received shared & fail immediately request\n");
cmd = F_SETLK;
-   flock->fl_type = F_RDLCK;
+   flock->fl_core.flc_type = F_RDLCK;
break;
case SMB2_LOCKFLAG_EXCLUSIVE | SMB2_LOCKFLAG_FAIL_IMMEDIATELY:
ksmbd_debug(SMB,
"received exclusive & fail immediately request\n");
cmd = F_SETLK;
-   flock->fl_type = F_WRLCK;
+   flock->fl_core.flc_type = F_WRLCK;
break;
case SMB2_LOCKFLAG_UNLOCK:
ksmbd_debug(SMB, "received unlock request\n");
-   flock->fl_type = F_UNLCK;
+   flock->fl_core.flc_type = F_UNLCK;
cmd = F_SETLK;
break;
}
@@ -6842,13 +6841,13 @@ static void smb2_remove_blocked_lock(void **argv)
struct file_lock *flock = (struct file_lock *)argv[0];
 
ksmbd_vfs_posix_lock_unblock(flock);
-   wake_up(&flock->fl_wait);
+   wake_up(&flock->fl_core.flc_wait);
 }
 
 static inline bool lock_defer_pending(struct file_lock *fl)
 {
/* check pending lock waiters */
-   return waitqueue_active(&fl->fl_wait);
+   return waitqueue_active(&fl->fl_core.flc_wait);
 }
 
 /**
@@ -6939,8 +6938,8 @@ int smb2_lock(struct ksmbd_work *work)
list_for_each_entry(cmp_lock, _list, llist) {
if (cmp_lock->fl->fl_start <= flock->fl_start &&
cmp_lock->fl->fl_end >= flock->fl_end) {
-   if (cmp_lock->fl->fl_type != F_UNLCK &&
-   flock->fl_type != F_UNLCK) {
+   if (cmp_lock->fl->fl_core.flc_type != F_UNLCK &&
+   flock->fl_core.flc_type != F_UNLCK) {
	pr_err("conflict two locks in one request\n");
err = -EINVAL;
locks_free_lock(flock);
@@ -6988,12 +6987,12 @@ int smb2_lock(struct ksmbd_work *work)
	list_for_each_entry(conn, &conn_list, conns_list) {
	spin_lock(&conn->llist_lock);
	list_for_each_entry_safe(cmp_lock, tmp2, &conn->lock_list, clist) {
-   if (file_inode(cmp_lock->fl->fl_file) !=
-   file_inode(smb_lock->fl->fl_file))
+   if (file_inode(cmp_lock->fl->fl_core.flc_file) !=
+   file_inode(smb_lock->fl->fl_core.flc_file))
	continue;
 
-   if (smb_lock->fl->fl_type == F_UNLCK) {
-  

[PATCH v2 38/41] smb/client: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/smb/client/cifsglob.h |  1 -
 fs/smb/client/cifssmb.c  |  9 +++---
 fs/smb/client/file.c | 75 
 fs/smb/client/smb2file.c |  3 +-
 4 files changed, 43 insertions(+), 45 deletions(-)

diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
index fcda4c77c649..20036fb16cec 100644
--- a/fs/smb/client/cifsglob.h
+++ b/fs/smb/client/cifsglob.h
@@ -26,7 +26,6 @@
 #include 
 #include "../common/smb2pdu.h"
 #include "smb2pdu.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #define SMB_PATH_MAX 260
diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
index e19ecf692c20..aae4e9ddc59d 100644
--- a/fs/smb/client/cifssmb.c
+++ b/fs/smb/client/cifssmb.c
@@ -15,7 +15,6 @@
  /* want to reuse a stale file handle and only the caller knows the file info 
*/
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -2067,20 +2066,20 @@ CIFSSMBPosixLock(const unsigned int xid, struct 
cifs_tcon *tcon,
parm_data = (struct cifs_posix_lock *)
	((char *)&pSMBr->hdr.Protocol + data_offset);
if (parm_data->lock_type == cpu_to_le16(CIFS_UNLCK))
-   pLockData->fl_type = F_UNLCK;
+   pLockData->fl_core.flc_type = F_UNLCK;
else {
if (parm_data->lock_type ==
cpu_to_le16(CIFS_RDLCK))
-   pLockData->fl_type = F_RDLCK;
+   pLockData->fl_core.flc_type = F_RDLCK;
else if (parm_data->lock_type ==
cpu_to_le16(CIFS_WRLCK))
-   pLockData->fl_type = F_WRLCK;
+   pLockData->fl_core.flc_type = F_WRLCK;
 
pLockData->fl_start = le64_to_cpu(parm_data->start);
pLockData->fl_end = pLockData->fl_start +
(le64_to_cpu(parm_data->length) ?
 le64_to_cpu(parm_data->length) - 1 : 0);
-   pLockData->fl_pid = -le32_to_cpu(parm_data->pid);
+   pLockData->fl_core.flc_pid = -le32_to_cpu(parm_data->pid);
}
}
 
diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
index dd87b2ef24dc..9a977ec0fb2f 100644
--- a/fs/smb/client/file.c
+++ b/fs/smb/client/file.c
@@ -9,7 +9,6 @@
  *
  */
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -1313,20 +1312,20 @@ cifs_lock_test(struct cifsFileInfo *cfile, __u64 offset, __u64 length,
	down_read(&cinode->lock_sem);
 
exist = cifs_find_lock_conflict(cfile, offset, length, type,
-   flock->fl_flags, &conf_lock,
+   flock->fl_core.flc_flags, &conf_lock,
CIFS_LOCK_OP);
if (exist) {
flock->fl_start = conf_lock->offset;
flock->fl_end = conf_lock->offset + conf_lock->length - 1;
-   flock->fl_pid = conf_lock->pid;
+   flock->fl_core.flc_pid = conf_lock->pid;
if (conf_lock->type & server->vals->shared_lock_type)
-   flock->fl_type = F_RDLCK;
+   flock->fl_core.flc_type = F_RDLCK;
else
-   flock->fl_type = F_WRLCK;
+   flock->fl_core.flc_type = F_WRLCK;
} else if (!cinode->can_cache_brlcks)
rc = 1;
else
-   flock->fl_type = F_UNLCK;
+   flock->fl_core.flc_type = F_UNLCK;
 
	up_read(&cinode->lock_sem);
return rc;
@@ -1402,16 +1401,16 @@ cifs_posix_lock_test(struct file *file, struct file_lock *flock)
 {
int rc = 0;
struct cifsInodeInfo *cinode = CIFS_I(file_inode(file));
-   unsigned char saved_type = flock->fl_type;
+   unsigned char saved_type = flock->fl_core.flc_type;
 
-   if ((flock->fl_flags & FL_POSIX) == 0)
+   if ((flock->fl_core.flc_flags & FL_POSIX) == 0)
return 1;
 
	down_read(&cinode->lock_sem);
posix_test_lock(file, flock);
 
-   if (flock->fl_type == F_UNLCK && !cinode->can_cache_brlcks) {
-   flock->fl_type = saved_type;
+   if (flock->fl_core.flc_type == F_UNLCK && !cinode->can_cache_brlcks) {
+   flock->fl_core.flc_type = saved_type;
rc = 1;
}
 
@@ -1432,7 +1431,7 @@ cifs_posix_lock_set(struct file *file, struct file_lock *flock)

[PATCH v2 37/41] ocfs2: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/ocfs2/locks.c  | 13 ++---
 fs/ocfs2/stack_user.c |  3 +--
 2 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/fs/ocfs2/locks.c b/fs/ocfs2/locks.c
index 8a9970dc852e..b86df9b59719 100644
--- a/fs/ocfs2/locks.c
+++ b/fs/ocfs2/locks.c
@@ -8,7 +8,6 @@
  */
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
@@ -28,7 +27,7 @@ static int ocfs2_do_flock(struct file *file, struct inode *inode,
struct ocfs2_file_private *fp = file->private_data;
	struct ocfs2_lock_res *lockres = &fp->fp_flock;
 
-   if (fl->fl_type == F_WRLCK)
+   if (fl->fl_core.flc_type == F_WRLCK)
level = 1;
if (!IS_SETLKW(cmd))
trylock = 1;
@@ -54,8 +53,8 @@ static int ocfs2_do_flock(struct file *file, struct inode *inode,
 */
 
	locks_init_lock(&request);
-   request.fl_type = F_UNLCK;
-   request.fl_flags = FL_FLOCK;
+   request.fl_core.flc_type = F_UNLCK;
+   request.fl_core.flc_flags = FL_FLOCK;
	locks_lock_file_wait(file, &request);
 
ocfs2_file_unlock(file);
@@ -101,14 +100,14 @@ int ocfs2_flock(struct file *file, int cmd, struct file_lock *fl)
struct inode *inode = file->f_mapping->host;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
 
if ((osb->s_mount_opt & OCFS2_MOUNT_LOCALFLOCKS) ||
ocfs2_mount_local(osb))
return locks_lock_file_wait(file, fl);
 
-   if (fl->fl_type == F_UNLCK)
+   if (fl->fl_core.flc_type == F_UNLCK)
return ocfs2_do_funlock(file, cmd, fl);
else
return ocfs2_do_flock(file, inode, cmd, fl);
@@ -119,7 +118,7 @@ int ocfs2_lock(struct file *file, int cmd, struct file_lock *fl)
struct inode *inode = file->f_mapping->host;
struct ocfs2_super *osb = OCFS2_SB(inode->i_sb);
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->fl_core.flc_flags & FL_POSIX))
return -ENOLCK;
 
return ocfs2_plock(osb->cconn, OCFS2_I(inode)->ip_blkno, file, cmd, fl);
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index 460c882c5384..70fa466746d3 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -9,7 +9,6 @@
 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -745,7 +744,7 @@ static int user_plock(struct ocfs2_cluster_connection *conn,
return dlm_posix_cancel(conn->cc_lockspace, ino, file, fl);
else if (IS_GETLK(cmd))
return dlm_posix_get(conn->cc_lockspace, ino, file, fl);
-   else if (fl->fl_type == F_UNLCK)
+   else if (fl->fl_core.flc_type == F_UNLCK)
return dlm_posix_unlock(conn->cc_lockspace, ino, file, fl);
else
return dlm_posix_lock(conn->cc_lockspace, ino, file, cmd, fl);

-- 
2.43.0




[PATCH v2 36/41] nfsd: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/nfsd/filecache.c|  4 +--
 fs/nfsd/netns.h|  1 -
 fs/nfsd/nfs4callback.c |  2 +-
 fs/nfsd/nfs4layouts.c  | 15 +-
 fs/nfsd/nfs4state.c| 77 +-
 5 files changed, 50 insertions(+), 49 deletions(-)

diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
index 9cb7f0c33df5..cdd36758c692 100644
--- a/fs/nfsd/filecache.c
+++ b/fs/nfsd/filecache.c
@@ -662,8 +662,8 @@ nfsd_file_lease_notifier_call(struct notifier_block *nb, unsigned long arg,
struct file_lock *fl = data;
 
/* Only close files for F_SETLEASE leases */
-   if (fl->fl_flags & FL_LEASE)
-   nfsd_file_close_inode(file_inode(fl->fl_file));
+   if (fl->fl_core.flc_flags & FL_LEASE)
+   nfsd_file_close_inode(file_inode(fl->fl_core.flc_file));
return 0;
 }
 
diff --git a/fs/nfsd/netns.h b/fs/nfsd/netns.h
index fd91125208be..74b4360779a1 100644
--- a/fs/nfsd/netns.h
+++ b/fs/nfsd/netns.h
@@ -10,7 +10,6 @@
 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
index 926c29879c6a..3513c94481b4 100644
--- a/fs/nfsd/nfs4callback.c
+++ b/fs/nfsd/nfs4callback.c
@@ -674,7 +674,7 @@ static void nfs4_xdr_enc_cb_notify_lock(struct rpc_rqst *req,
const struct nfsd4_callback *cb = data;
const struct nfsd4_blocked_lock *nbl =
container_of(cb, struct nfsd4_blocked_lock, nbl_cb);
-   struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.fl_owner;
+   struct nfs4_lockowner *lo = (struct nfs4_lockowner *)nbl->nbl_lock.fl_core.flc_owner;
struct nfs4_cb_compound_hdr hdr = {
.ident = 0,
.minorversion = cb->cb_clp->cl_minorversion,
diff --git a/fs/nfsd/nfs4layouts.c b/fs/nfsd/nfs4layouts.c
index 5e8096bc5eaa..ddf221d31acf 100644
--- a/fs/nfsd/nfs4layouts.c
+++ b/fs/nfsd/nfs4layouts.c
@@ -193,14 +193,15 @@ nfsd4_layout_setlease(struct nfs4_layout_stateid *ls)
return -ENOMEM;
locks_init_lock(fl);
fl->fl_lmops = _layouts_lm_ops;
-   fl->fl_flags = FL_LAYOUT;
-   fl->fl_type = F_RDLCK;
+   fl->fl_core.flc_flags = FL_LAYOUT;
+   fl->fl_core.flc_type = F_RDLCK;
fl->fl_end = OFFSET_MAX;
-   fl->fl_owner = ls;
-   fl->fl_pid = current->tgid;
-   fl->fl_file = ls->ls_file->nf_file;
+   fl->fl_core.flc_owner = ls;
+   fl->fl_core.flc_pid = current->tgid;
+   fl->fl_core.flc_file = ls->ls_file->nf_file;
 
-   status = vfs_setlease(fl->fl_file, fl->fl_type, &fl, NULL);
+   status = vfs_setlease(fl->fl_core.flc_file, fl->fl_core.flc_type, &fl,
+ NULL);
if (status) {
locks_free_lock(fl);
return status;
@@ -731,7 +732,7 @@ nfsd4_layout_lm_break(struct file_lock *fl)
 * in time:
 */
fl->fl_break_time = 0;
-   nfsd4_recall_file_layout(fl->fl_owner);
+   nfsd4_recall_file_layout(fl->fl_core.flc_owner);
return false;
 }
 
diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index f66e67394157..5899e5778fe7 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -4924,7 +4924,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
 static bool
 nfsd_break_deleg_cb(struct file_lock *fl)
 {
-   struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_owner;
+   struct nfs4_delegation *dp = (struct nfs4_delegation *)fl->fl_core.flc_owner;
struct nfs4_file *fp = dp->dl_stid.sc_file;
struct nfs4_client *clp = dp->dl_stid.sc_client;
struct nfsd_net *nn;
@@ -4962,7 +4962,7 @@ nfsd_break_deleg_cb(struct file_lock *fl)
  */
 static bool nfsd_breaker_owns_lease(struct file_lock *fl)
 {
-   struct nfs4_delegation *dl = fl->fl_owner;
+   struct nfs4_delegation *dl = fl->fl_core.flc_owner;
struct svc_rqst *rqst;
struct nfs4_client *clp;
 
@@ -4980,7 +4980,7 @@ static int
 nfsd_change_deleg_cb(struct file_lock *onlist, int arg,
 struct list_head *dispose)
 {
-   struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_owner;
+   struct nfs4_delegation *dp = (struct nfs4_delegation *)onlist->fl_core.flc_owner;
struct nfs4_client *clp = dp->dl_stid.sc_client;
 
if (arg & F_UNLCK) {
@@ -5340,12 +5340,12 @@ static struct file_lock *nfs4_alloc_init_lease(struct nfs4_delegation *dp,
if (!fl)
return NULL;
fl->fl_lmops = _lease_mng_ops;
-   fl->fl_flags = FL_DELEG;
-   fl->fl_type = flag == NFS4_OPEN_DELEGATE_

[PATCH v2 35/41] nfs: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/nfs/delegation.c |  4 ++--
 fs/nfs/file.c   | 23 +++
 fs/nfs/nfs3proc.c   |  2 +-
 fs/nfs/nfs4_fs.h|  1 -
 fs/nfs/nfs4proc.c   | 35 +++
 fs/nfs/nfs4state.c  |  6 +++---
 fs/nfs/nfs4trace.h  |  4 ++--
 fs/nfs/nfs4xdr.c|  8 
 fs/nfs/write.c  |  9 -
 9 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
index fa1a14def45c..c308db36e932 100644
--- a/fs/nfs/delegation.c
+++ b/fs/nfs/delegation.c
@@ -156,8 +156,8 @@ static int nfs_delegation_claim_locks(struct nfs4_state *state, const nfs4_state
	list = &flctx->flc_posix;
	spin_lock(&flctx->flc_lock);
 restart:
-   list_for_each_entry(fl, list, fl_list) {
-   if (nfs_file_open_context(fl->fl_file)->state != state)
+   list_for_each_entry(fl, list, fl_core.flc_list) {
+   if (nfs_file_open_context(fl->fl_core.flc_file)->state != state)
continue;
	spin_unlock(&flctx->flc_lock);
status = nfs4_lock_delegation_recall(fl, state, stateid);
diff --git a/fs/nfs/file.c b/fs/nfs/file.c
index 3c9a8ad91540..fb3cd614e36e 100644
--- a/fs/nfs/file.c
+++ b/fs/nfs/file.c
@@ -31,7 +31,6 @@
 #include 
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 
 #include "delegation.h"
@@ -721,15 +720,15 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 {
struct inode *inode = filp->f_mapping->host;
int status = 0;
-   unsigned int saved_type = fl->fl_type;
+   unsigned int saved_type = fl->fl_core.flc_type;
 
/* Try local locking first */
posix_test_lock(filp, fl);
-   if (fl->fl_type != F_UNLCK) {
+   if (fl->fl_core.flc_type != F_UNLCK) {
/* found a conflict */
goto out;
}
-   fl->fl_type = saved_type;
+   fl->fl_core.flc_type = saved_type;
 
if (NFS_PROTO(inode)->have_delegation(inode, FMODE_READ))
goto out_noconflict;
@@ -741,7 +740,7 @@ do_getlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 out:
return status;
 out_noconflict:
-   fl->fl_type = F_UNLCK;
+   fl->fl_core.flc_type = F_UNLCK;
goto out;
 }
 
@@ -766,7 +765,7 @@ do_unlk(struct file *filp, int cmd, struct file_lock *fl, int is_local)
 *  If we're signalled while cleaning up locks on process exit, we
 *  still need to complete the unlock.
 */
-   if (status < 0 && !(fl->fl_flags & FL_CLOSE))
+   if (status < 0 && !(fl->fl_core.flc_flags & FL_CLOSE))
return status;
}
 
@@ -833,12 +832,12 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
 
dprintk("NFS: lock(%pD2, t=%x, fl=%x, r=%lld:%lld)\n",
-   filp, fl->fl_type, fl->fl_flags,
+   filp, fl->fl_core.flc_type, fl->fl_core.flc_flags,
(long long)fl->fl_start, (long long)fl->fl_end);
 
nfs_inc_stats(inode, NFSIOS_VFSLOCK);
 
-   if (fl->fl_flags & FL_RECLAIM)
+   if (fl->fl_core.flc_flags & FL_RECLAIM)
return -ENOGRACE;
 
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FCNTL)
@@ -852,7 +851,7 @@ int nfs_lock(struct file *filp, int cmd, struct file_lock *fl)
 
if (IS_GETLK(cmd))
ret = do_getlk(filp, cmd, fl, is_local);
-   else if (fl->fl_type == F_UNLCK)
+   else if (fl->fl_core.flc_type == F_UNLCK)
ret = do_unlk(filp, cmd, fl, is_local);
else
ret = do_setlk(filp, cmd, fl, is_local);
@@ -870,16 +869,16 @@ int nfs_flock(struct file *filp, int cmd, struct file_lock *fl)
int is_local = 0;
 
dprintk("NFS: flock(%pD2, t=%x, fl=%x)\n",
-   filp, fl->fl_type, fl->fl_flags);
+   filp, fl->fl_core.flc_type, fl->fl_core.flc_flags);
 
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
 
if (NFS_SERVER(inode)->flags & NFS_MOUNT_LOCAL_FLOCK)
is_local = 1;
 
/* We're simulating flock() locks using posix locks on the server */
-   if (fl->fl_type == F_UNLCK)
+   if (fl->fl_core.flc_type == F_UNLCK)
return do_unlk(filp, cmd, fl, is_local);
return do_setlk(filp, cmd, fl, is_local);
 }
diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c
index 2de66e4e8280..650ec250d7e5 10

[PATCH v2 34/41] lockd: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/lockd/clnt4xdr.c | 14 +-
 fs/lockd/clntlock.c |  2 +-
 fs/lockd/clntproc.c | 62 +++
 fs/lockd/clntxdr.c  | 14 +-
 fs/lockd/svc4proc.c | 10 +++
 fs/lockd/svclock.c  | 64 +++--
 fs/lockd/svcproc.c  | 10 +++
 fs/lockd/svcsubs.c  | 24 -
 fs/lockd/xdr.c  | 14 +-
 fs/lockd/xdr4.c | 14 +-
 include/linux/lockd/lockd.h |  8 +++---
 include/linux/lockd/xdr.h   |  1 -
 12 files changed, 121 insertions(+), 116 deletions(-)

diff --git a/fs/lockd/clnt4xdr.c b/fs/lockd/clnt4xdr.c
index 8161667c976f..de58ec4ff374 100644
--- a/fs/lockd/clnt4xdr.c
+++ b/fs/lockd/clnt4xdr.c
@@ -243,7 +243,7 @@ static void encode_nlm4_holder(struct xdr_stream *xdr,
u64 l_offset, l_len;
__be32 *p;
 
-   encode_bool(xdr, lock->fl.fl_type == F_RDLCK);
+   encode_bool(xdr, lock->fl.fl_core.flc_type == F_RDLCK);
encode_int32(xdr, lock->svid);
encode_netobj(xdr, lock->oh.data, lock->oh.len);
 
@@ -270,7 +270,7 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
goto out_overflow;
exclusive = be32_to_cpup(p++);
lock->svid = be32_to_cpup(p);
-   fl->fl_pid = (pid_t)lock->svid;
+   fl->fl_core.flc_pid = (pid_t)lock->svid;
 
	error = decode_netobj(xdr, &lock->oh);
if (unlikely(error))
@@ -280,8 +280,8 @@ static int decode_nlm4_holder(struct xdr_stream *xdr, struct nlm_res *result)
if (unlikely(p == NULL))
goto out_overflow;
 
-   fl->fl_flags = FL_POSIX;
-   fl->fl_type  = exclusive != 0 ? F_WRLCK : F_RDLCK;
+   fl->fl_core.flc_flags = FL_POSIX;
+   fl->fl_core.flc_type  = exclusive != 0 ? F_WRLCK : F_RDLCK;
	p = xdr_decode_hyper(p, &l_offset);
	xdr_decode_hyper(p, &l_len);
nlm4svc_set_file_lock_range(fl, l_offset, l_len);
@@ -357,7 +357,7 @@ static void nlm4_xdr_enc_testargs(struct rpc_rqst *req,
	const struct nlm_lock *lock = &args->lock;
 
	encode_cookie(xdr, &args->cookie);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
 }
 
@@ -380,7 +380,7 @@ static void nlm4_xdr_enc_lockargs(struct rpc_rqst *req,
 
	encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
encode_bool(xdr, args->reclaim);
encode_int32(xdr, args->state);
@@ -403,7 +403,7 @@ static void nlm4_xdr_enc_cancargs(struct rpc_rqst *req,
 
	encode_cookie(xdr, &args->cookie);
encode_bool(xdr, args->block);
-   encode_bool(xdr, lock->fl.fl_type == F_WRLCK);
+   encode_bool(xdr, lock->fl.fl_core.flc_type == F_WRLCK);
encode_nlm4_lock(xdr, lock);
 }
 
diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
index 5d85715be763..eaa463f2d44d 100644
--- a/fs/lockd/clntlock.c
+++ b/fs/lockd/clntlock.c
@@ -185,7 +185,7 @@ __be32 nlmclnt_grant(const struct sockaddr *addr, const struct nlm_lock *lock)
continue;
if (!rpc_cmp_addr(nlm_addr(block->b_host), addr))
continue;
-   if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_file)), fh) != 0)
+   if (nfs_compare_fh(NFS_FH(file_inode(fl_blocked->fl_core.flc_file)), fh) != 0)
continue;
/* Alright, we found a lock. Set the return status
 * and wake up the caller
diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index 1f71260603b7..0b8d0297523f 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -12,7 +12,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -134,7 +133,8 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
	char *nodename = req->a_host->h_rpcclnt->cl_nodename;
 
	nlmclnt_next_cookie(&argp->cookie);
-   memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_file)), sizeof(struct nfs_fh));
+   memcpy(&lock->fh, NFS_FH(file_inode(fl->fl_core.flc_file)),
+  sizeof(struct nfs_fh));
lock->caller  = nodename;
lock->oh.data = req->a_owner;
lock->oh.len  = snprintf(req->a_owner, sizeof(req->a_owner), "%u@%s",
@@ -143,7 +143,7 @@ static void nlmclnt_setlockargs(struct nlm_rqst *req, struct file_lock *fl)
lock->sv

[PATCH v2 33/41] gfs2: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/gfs2/file.c | 17 -
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
index 9e7cd054e924..dc0c4f7d7cc7 100644
--- a/fs/gfs2/file.c
+++ b/fs/gfs2/file.c
@@ -15,7 +15,6 @@
 #include 
 #include 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -1441,10 +1440,10 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
struct gfs2_sbd *sdp = GFS2_SB(file->f_mapping->host);
struct lm_lockstruct *ls = >sd_lockstruct;
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->fl_core.flc_flags & FL_POSIX))
return -ENOLCK;
if (gfs2_withdrawing_or_withdrawn(sdp)) {
-   if (fl->fl_type == F_UNLCK)
+   if (fl->fl_core.flc_type == F_UNLCK)
locks_lock_file_wait(file, fl);
return -EIO;
}
@@ -1452,7 +1451,7 @@ static int gfs2_lock(struct file *file, int cmd, struct file_lock *fl)
return dlm_posix_cancel(ls->ls_dlm, ip->i_no_addr, file, fl);
else if (IS_GETLK(cmd))
return dlm_posix_get(ls->ls_dlm, ip->i_no_addr, file, fl);
-   else if (fl->fl_type == F_UNLCK)
+   else if (fl->fl_core.flc_type == F_UNLCK)
return dlm_posix_unlock(ls->ls_dlm, ip->i_no_addr, file, fl);
else
return dlm_posix_lock(ls->ls_dlm, ip->i_no_addr, file, cmd, fl);
@@ -1484,7 +1483,7 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl)
int error = 0;
int sleeptime;
 
-   state = (fl->fl_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
+   state = (fl->fl_core.flc_type == F_WRLCK) ? LM_ST_EXCLUSIVE : LM_ST_SHARED;
flags = GL_EXACT | GL_NOPID;
if (!IS_SETLKW(cmd))
flags |= LM_FLAG_TRY_1CB;
@@ -1496,8 +1495,8 @@ static int do_flock(struct file *file, int cmd, struct file_lock *fl)
if (fl_gh->gh_state == state)
goto out;
	locks_init_lock(&request);
-   request.fl_type = F_UNLCK;
-   request.fl_flags = FL_FLOCK;
+   request.fl_core.flc_type = F_UNLCK;
+   request.fl_core.flc_flags = FL_FLOCK;
	locks_lock_file_wait(file, &request);
gfs2_glock_dq(fl_gh);
gfs2_holder_reinit(state, flags, fl_gh);
@@ -1558,10 +1557,10 @@ static void do_unflock(struct file *file, struct file_lock *fl)
 
 static int gfs2_flock(struct file *file, int cmd, struct file_lock *fl)
 {
-   if (!(fl->fl_flags & FL_FLOCK))
+   if (!(fl->fl_core.flc_flags & FL_FLOCK))
return -ENOLCK;
 
-   if (fl->fl_type == F_UNLCK) {
+   if (fl->fl_core.flc_type == F_UNLCK) {
do_unflock(file, fl);
return 0;
} else {

-- 
2.43.0




[PATCH v2 32/41] dlm: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/dlm/plock.c | 45 ++---
 1 file changed, 22 insertions(+), 23 deletions(-)

diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index b89dca1d51b0..b3e9fb9df808 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -4,7 +4,6 @@
  */
 
 #include 
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 #include 
@@ -139,14 +138,14 @@ int dlm_posix_lock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_LOCK;
-   op->info.pid= fl->fl_pid;
-   op->info.ex = (fl->fl_type == F_WRLCK);
-   op->info.wait   = !!(fl->fl_flags & FL_SLEEP);
+   op->info.pid= fl->fl_core.flc_pid;
+   op->info.ex = (fl->fl_core.flc_type == F_WRLCK);
+   op->info.wait   = !!(fl->fl_core.flc_flags & FL_SLEEP);
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;
op->info.end= fl->fl_end;
-   op->info.owner = (__u64)(long)fl->fl_owner;
+   op->info.owner = (__u64)(long) fl->fl_core.flc_owner;
/* async handling */
if (fl->fl_lmops && fl->fl_lmops->lm_grant) {
op_data = kzalloc(sizeof(*op_data), GFP_NOFS);
@@ -259,7 +258,7 @@ static int dlm_plock_callback(struct plock_op *op)
}
 
/* got fs lock; bookkeep locally as well: */
-   flc->fl_flags &= ~FL_SLEEP;
+   flc->fl_core.flc_flags &= ~FL_SLEEP;
if (posix_lock_file(file, flc, NULL)) {
/*
 * This can only happen in the case of kmalloc() failure.
@@ -292,7 +291,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
struct dlm_ls *ls;
struct plock_op *op;
int rv;
-   unsigned char saved_flags = fl->fl_flags;
+   unsigned char saved_flags = fl->fl_core.flc_flags;
 
ls = dlm_find_lockspace_local(lockspace);
if (!ls)
@@ -305,7 +304,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
/* cause the vfs unlock to return ENOENT if lock is not found */
-   fl->fl_flags |= FL_EXISTS;
+   fl->fl_core.flc_flags |= FL_EXISTS;
 
rv = locks_lock_file_wait(file, fl);
if (rv == -ENOENT) {
@@ -318,14 +317,14 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_UNLOCK;
-   op->info.pid= fl->fl_pid;
+   op->info.pid= fl->fl_core.flc_pid;
op->info.fsid   = ls->ls_global_id;
op->info.number = number;
op->info.start  = fl->fl_start;
op->info.end= fl->fl_end;
-   op->info.owner = (__u64)(long)fl->fl_owner;
+   op->info.owner = (__u64)(long)fl->fl_core.flc_owner;
 
-   if (fl->fl_flags & FL_CLOSE) {
+   if (fl->fl_core.flc_flags & FL_CLOSE) {
op->info.flags |= DLM_PLOCK_FL_CLOSE;
send_op(op);
rv = 0;
@@ -346,7 +345,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 
number, struct file *file,
dlm_release_plock_op(op);
 out:
dlm_put_lockspace(ls);
-   fl->fl_flags = saved_flags;
+   fl->fl_core.flc_flags = saved_flags;
return rv;
 }
 EXPORT_SYMBOL_GPL(dlm_posix_unlock);
@@ -376,14 +375,14 @@ int dlm_posix_cancel(dlm_lockspace_t *lockspace, u64 
number, struct file *file,
return -EINVAL;
 
memset(&info, 0, sizeof(info));
-   info.pid = fl->fl_pid;
-   info.ex = (fl->fl_type == F_WRLCK);
+   info.pid = fl->fl_core.flc_pid;
+   info.ex = (fl->fl_core.flc_type == F_WRLCK);
info.fsid = ls->ls_global_id;
dlm_put_lockspace(ls);
info.number = number;
info.start = fl->fl_start;
info.end = fl->fl_end;
-   info.owner = (__u64)(long)fl->fl_owner;
+   info.owner = (__u64)(long)fl->fl_core.flc_owner;
 
rv = do_lock_cancel(&info);
switch (rv) {
@@ -438,13 +437,13 @@ int dlm_posix_get(dlm_lockspace_t *lockspace, u64 number, 
struct file *file,
}
 
op->info.optype = DLM_PLOCK_OP_GET;
-   op->info.pid= fl->fl_pid;
-   op->info.ex = (fl->fl_type == F_WRLCK);
+   op->info.pid= fl->fl_core.flc_pid;
+   op->info.ex = (fl->fl_core.flc_type == F_WRLCK);
op->info.fsid 

[PATCH v2 31/41] ceph: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/ceph/locks.c | 75 +
 1 file changed, 38 insertions(+), 37 deletions(-)

diff --git a/fs/ceph/locks.c b/fs/ceph/locks.c
index ccb358c398ca..89e44e7543eb 100644
--- a/fs/ceph/locks.c
+++ b/fs/ceph/locks.c
@@ -7,7 +7,6 @@
 
 #include "super.h"
 #include "mds_client.h"
-#define _NEED_FILE_LOCK_FIELD_MACROS
 #include 
 #include 
 
@@ -34,7 +33,7 @@ void __init ceph_flock_init(void)
 
 static void ceph_fl_copy_lock(struct file_lock *dst, struct file_lock *src)
 {
-   struct inode *inode = file_inode(dst->fl_file);
+   struct inode *inode = file_inode(dst->fl_core.flc_file);
atomic_inc(&ceph_inode(inode)->i_filelock_ref);
dst->fl_u.ceph.inode = igrab(inode);
 }
@@ -111,17 +110,18 @@ static int ceph_lock_message(u8 lock_type, u16 operation, 
struct inode *inode,
else
length = fl->fl_end - fl->fl_start + 1;
 
-   owner = secure_addr(fl->fl_owner);
+   owner = secure_addr(fl->fl_core.flc_owner);
 
doutc(cl, "rule: %d, op: %d, owner: %llx, pid: %llu, "
"start: %llu, length: %llu, wait: %d, type: %d\n",
-   (int)lock_type, (int)operation, owner, (u64)fl->fl_pid,
-   fl->fl_start, length, wait, fl->fl_type);
+   (int)lock_type, (int)operation, owner,
+   (u64)fl->fl_core.flc_pid,
+   fl->fl_start, length, wait, fl->fl_core.flc_type);
 
req->r_args.filelock_change.rule = lock_type;
req->r_args.filelock_change.type = cmd;
req->r_args.filelock_change.owner = cpu_to_le64(owner);
-   req->r_args.filelock_change.pid = cpu_to_le64((u64)fl->fl_pid);
req->r_args.filelock_change.pid = cpu_to_le64((u64)fl->fl_core.flc_pid);
req->r_args.filelock_change.start = cpu_to_le64(fl->fl_start);
req->r_args.filelock_change.length = cpu_to_le64(length);
req->r_args.filelock_change.wait = wait;
@@ -131,13 +131,13 @@ static int ceph_lock_message(u8 lock_type, u16 operation, 
struct inode *inode,
err = ceph_mdsc_wait_request(mdsc, req, wait ?
ceph_lock_wait_for_completion : NULL);
if (!err && operation == CEPH_MDS_OP_GETFILELOCK) {
-   fl->fl_pid = le64_to_cpu(req->r_reply_info.filelock_reply->pid);
+   fl->fl_core.flc_pid = le64_to_cpu(req->r_reply_info.filelock_reply->pid);
if (CEPH_LOCK_SHARED == req->r_reply_info.filelock_reply->type)
-   fl->fl_type = F_RDLCK;
+   fl->fl_core.flc_type = F_RDLCK;
else if (CEPH_LOCK_EXCL == req->r_reply_info.filelock_reply->type)
-   fl->fl_type = F_WRLCK;
+   fl->fl_core.flc_type = F_WRLCK;
else
-   fl->fl_type = F_UNLCK;
+   fl->fl_core.flc_type = F_UNLCK;
 
fl->fl_start = le64_to_cpu(req->r_reply_info.filelock_reply->start);
length = le64_to_cpu(req->r_reply_info.filelock_reply->start) +
@@ -151,8 +151,8 @@ static int ceph_lock_message(u8 lock_type, u16 operation, 
struct inode *inode,
ceph_mdsc_put_request(req);
doutc(cl, "rule: %d, op: %d, pid: %llu, start: %llu, "
  "length: %llu, wait: %d, type: %d, err code %d\n",
- (int)lock_type, (int)operation, (u64)fl->fl_pid,
- fl->fl_start, length, wait, fl->fl_type, err);
+ (int)lock_type, (int)operation, (u64)fl->fl_core.flc_pid,
+ fl->fl_start, length, wait, fl->fl_core.flc_type, err);
return err;
 }
 
@@ -228,10 +228,10 @@ static int ceph_lock_wait_for_completion(struct 
ceph_mds_client *mdsc,
 static int try_unlock_file(struct file *file, struct file_lock *fl)
 {
int err;
-   unsigned int orig_flags = fl->fl_flags;
-   fl->fl_flags |= FL_EXISTS;
+   unsigned int orig_flags = fl->fl_core.flc_flags;
+   fl->fl_core.flc_flags |= FL_EXISTS;
err = locks_lock_file_wait(file, fl);
-   fl->fl_flags = orig_flags;
+   fl->fl_core.flc_flags = orig_flags;
if (err == -ENOENT) {
if (!(orig_flags & FL_EXISTS))
err = 0;
@@ -254,13 +254,13 @@ int ceph_lock(struct file *file, int cmd, struct 
file_lock *fl)
u8 wait = 0;
u8 lock_cmd;
 
-   if (!(fl->fl_flags & FL_POSIX))
+   if (!(fl->fl_core.flc_flags & FL_POSIX))
return -ENOLCK;
 
if (ceph_inode_is_shutdow

[PATCH v2 30/41] afs: adapt to breakup of struct file_lock

2024-01-25 Thread Jeff Layton
Most of the existing APIs have remained the same, but subsystems that
access file_lock fields directly need to reach into struct
file_lock_core now.

Signed-off-by: Jeff Layton 
---
 fs/afs/flock.c | 55 +++---
 fs/afs/internal.h  |  1 -
 include/trace/events/afs.h |  4 ++--
 3 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index e7feaf66bddf..34e7510526b9 100644
--- a/fs/afs/flock.c
+++ b/fs/afs/flock.c
@@ -93,13 +93,13 @@ static void afs_grant_locks(struct afs_vnode *vnode)
bool exclusive = (vnode->lock_type == AFS_LOCK_WRITE);
 
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
-   if (!exclusive && p->fl_type == F_WRLCK)
+   if (!exclusive && p->fl_core.flc_type == F_WRLCK)
continue;

list_move_tail(&p->fl_u.afs.link, &vnode->granted_locks);
p->fl_u.afs.state = AFS_LOCK_GRANTED;
trace_afs_flock_op(vnode, p, afs_flock_op_grant);
-   wake_up(&p->fl_wait);
+   wake_up(&p->fl_core.flc_wait);
}
 }
 
@@ -121,16 +121,16 @@ static void afs_next_locker(struct afs_vnode *vnode, int 
error)
 
list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
if (error &&
-   p->fl_type == type &&
-   afs_file_key(p->fl_file) == key) {
+   p->fl_core.flc_type == type &&
+   afs_file_key(p->fl_core.flc_file) == key) {
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = error;
-   wake_up(&p->fl_wait);
+   wake_up(&p->fl_core.flc_wait);
}
 
/* Select the next locker to hand off to. */
if (next &&
-   (next->fl_type == F_WRLCK || p->fl_type == F_RDLCK))
+   (next->fl_core.flc_type == F_WRLCK || p->fl_core.flc_type == F_RDLCK))
continue;
next = p;
}
@@ -142,7 +142,7 @@ static void afs_next_locker(struct afs_vnode *vnode, int 
error)
afs_set_lock_state(vnode, AFS_VNODE_LOCK_SETTING);
next->fl_u.afs.state = AFS_LOCK_YOUR_TRY;
trace_afs_flock_op(vnode, next, afs_flock_op_wake);
-   wake_up(&next->fl_wait);
+   wake_up(&next->fl_core.flc_wait);
} else {
afs_set_lock_state(vnode, AFS_VNODE_LOCK_NONE);
trace_afs_flock_ev(vnode, NULL, afs_flock_no_lockers, 0);
@@ -166,7 +166,7 @@ static void afs_kill_lockers_enoent(struct afs_vnode *vnode)
   struct file_lock, fl_u.afs.link);
list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = -ENOENT;
-   wake_up(&p->fl_wait);
+   wake_up(&p->fl_core.flc_wait);
}
 
key_put(vnode->lock_key);
@@ -464,14 +464,14 @@ static int afs_do_setlk(struct file *file, struct 
file_lock *fl)
 
_enter("{%llx:%llu},%llu-%llu,%u,%u",
   vnode->fid.vid, vnode->fid.vnode,
-  fl->fl_start, fl->fl_end, fl->fl_type, mode);
+  fl->fl_start, fl->fl_end, fl->fl_core.flc_type, mode);
 
fl->fl_ops = &afs_lock_ops;
INIT_LIST_HEAD(&fl->fl_u.afs.link);
fl->fl_u.afs.state = AFS_LOCK_PENDING;
 
partial = (fl->fl_start != 0 || fl->fl_end != OFFSET_MAX);
-   type = (fl->fl_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
+   type = (fl->fl_core.flc_type == F_RDLCK) ? AFS_LOCK_READ : AFS_LOCK_WRITE;
if (mode == afs_flock_mode_write && partial)
type = AFS_LOCK_WRITE;
 
@@ -524,7 +524,7 @@ static int afs_do_setlk(struct file *file, struct file_lock 
*fl)
}
 
if (vnode->lock_state == AFS_VNODE_LOCK_NONE &&
-   !(fl->fl_flags & FL_SLEEP)) {
+   !(fl->fl_core.flc_flags & FL_SLEEP)) {
ret = -EAGAIN;
if (type == AFS_LOCK_READ) {
if (vnode->status.lock_count == -1)
@@ -621,7 +621,7 @@ static int afs_do_setlk(struct file *file, struct file_lock 
*fl)
return 0;
 
 lock_is_contended:
-   if (!(fl->fl_flags & FL_SLEEP)) {
+   if (!(fl->fl_core.flc_flags & FL_SLEEP)) {
list_del_init(&fl->fl_u.afs.link);
afs_next_locker(vnode, 0);
ret = -EAGAIN;
@@ -641,7 +641,7 @@ static int afs_do_setlk(struct file *file, struct file_lock 
*fl)
spin_unlock(&vnode->lock);
 
trace_afs_flock_ev(vnode, fl, afs_flock_waiting, 0);
-   ret = wait_event_interruptible(fl->fl_wait,
+   ret = wait_event_interruptib

[PATCH v2 20/41] filelock: convert __locks_insert_block, conflict and deadlock checks to use file_lock_core

2024-01-25 Thread Jeff Layton
Have both __locks_insert_block and the deadlock and conflict checking
functions take a struct file_lock_core pointer instead of a struct
file_lock one. Also, change posix_locks_deadlock to return bool.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 134 +
 1 file changed, 73 insertions(+), 61 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index fb113103dc1b..a86841fc8220 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -757,39 +757,41 @@ EXPORT_SYMBOL(locks_delete_block);
  * waiters, and add beneath any waiter that blocks the new waiter.
  * Thus wakeups don't happen until needed.
  */
-static void __locks_insert_block(struct file_lock *blocker,
-struct file_lock *waiter,
-bool conflict(struct file_lock *,
-  struct file_lock *))
+static void __locks_insert_block(struct file_lock *blocker_fl,
+struct file_lock *waiter_fl,
+bool conflict(struct file_lock_core *,
+  struct file_lock_core *))
 {
-   struct file_lock *fl;
-   BUG_ON(!list_empty(&waiter->fl_core.flc_blocked_member));
+   struct file_lock_core *blocker = &blocker_fl->fl_core;
+   struct file_lock_core *waiter = &waiter_fl->fl_core;
+   struct file_lock_core *flc;
 
+   BUG_ON(!list_empty(&waiter->flc_blocked_member));
 new_blocker:
-   list_for_each_entry(fl, &blocker->fl_core.flc_blocked_requests,
-   fl_core.flc_blocked_member)
-   if (conflict(fl, waiter)) {
-   blocker = fl;
+   list_for_each_entry(flc, &blocker->flc_blocked_requests, flc_blocked_member)
+   if (conflict(flc, waiter)) {
+   blocker = flc;
goto new_blocker;
}
-   waiter->fl_core.flc_blocker = blocker;
-   list_add_tail(&waiter->fl_core.flc_blocked_member,
- &blocker->fl_core.flc_blocked_requests);
-   if ((blocker->fl_core.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
-   locks_insert_global_blocked(&waiter->fl_core);
+   waiter->flc_blocker = file_lock(blocker);
+   list_add_tail(&waiter->flc_blocked_member,
+ &blocker->flc_blocked_requests);
 
-   /* The requests in waiter->fl_blocked are known to conflict with
+   if ((blocker->flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
+   locks_insert_global_blocked(waiter);
+
+   /* The requests in waiter->flc_blocked are known to conflict with
  * waiter, but might not conflict with blocker, or the requests
  * and lock which block it.  So they all need to be woken.
  */
-   __locks_wake_up_blocks(&waiter->fl_core);
+   __locks_wake_up_blocks(waiter);
 }
 
 /* Must be called with flc_lock held. */
 static void locks_insert_block(struct file_lock *blocker,
   struct file_lock *waiter,
-  bool conflict(struct file_lock *,
-struct file_lock *))
+  bool conflict(struct file_lock_core *,
+struct file_lock_core *))
 {
spin_lock(&blocked_lock_lock);
__locks_insert_block(blocker, waiter, conflict);
@@ -846,12 +848,12 @@ locks_delete_lock_ctx(struct file_lock *fl, struct 
list_head *dispose)
 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
  * checks for shared/exclusive status of overlapping locks.
  */
-static bool locks_conflict(struct file_lock *caller_fl,
-  struct file_lock *sys_fl)
+static bool locks_conflict(struct file_lock_core *caller_fl,
+  struct file_lock_core *sys_fl)
 {
-   if (sys_fl->fl_core.flc_type == F_WRLCK)
+   if (sys_fl->flc_type == F_WRLCK)
return true;
-   if (caller_fl->fl_core.flc_type == F_WRLCK)
+   if (caller_fl->flc_type == F_WRLCK)
return true;
return false;
 }
@@ -859,20 +861,23 @@ static bool locks_conflict(struct file_lock *caller_fl,
 /* Determine if lock sys_fl blocks lock caller_fl. POSIX specific
  * checking before calling the locks_conflict().
  */
-static bool posix_locks_conflict(struct file_lock *caller_fl,
-struct file_lock *sys_fl)
+static bool posix_locks_conflict(struct file_lock_core *caller_flc,
+struct file_lock_core *sys_flc)
 {
+   struct file_lock *caller_fl = file_lock(caller_flc);
+   struct file_lock *sys_fl = file_lock(sys_flc);
+
/* POSIX locks owned by the same process do not conflict with
 * each other.
 */
-   if (posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
+   if (posix_same_owner(caller_flc, sys_flc))
ret

[PATCH v2 26/41] filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx

2024-01-25 Thread Jeff Layton
Have these functions take a file_lock_core pointer instead of a
file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 44 ++--
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 03985cfb7eff..0491d621417d 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -824,28 +824,28 @@ static void locks_wake_up_blocks(struct file_lock_core 
*blocker)
 }
 
 static void
-locks_insert_lock_ctx(struct file_lock *fl, struct list_head *before)
+locks_insert_lock_ctx(struct file_lock_core *fl, struct list_head *before)
 {
-   list_add_tail(&fl->fl_core.flc_list, before);
-   locks_insert_global_locks(&fl->fl_core);
+   list_add_tail(&fl->flc_list, before);
+   locks_insert_global_locks(fl);
 }
 
 static void
-locks_unlink_lock_ctx(struct file_lock *fl)
+locks_unlink_lock_ctx(struct file_lock_core *fl)
 {
-   locks_delete_global_locks(&fl->fl_core);
-   list_del_init(&fl->fl_core.flc_list);
-   locks_wake_up_blocks(&fl->fl_core);
+   locks_delete_global_locks(fl);
+   list_del_init(&fl->flc_list);
+   locks_wake_up_blocks(fl);
 }
 
 static void
-locks_delete_lock_ctx(struct file_lock *fl, struct list_head *dispose)
+locks_delete_lock_ctx(struct file_lock_core *fl, struct list_head *dispose)
 {
locks_unlink_lock_ctx(fl);
if (dispose)
-   list_add(&fl->fl_core.flc_list, dispose);
+   list_add(&fl->flc_list, dispose);
else
-   locks_free_lock(fl);
+   locks_free_lock(file_lock(fl));
 }
 
 /* Determine if lock sys_fl blocks lock caller_fl. Common functionality
@@ -1072,7 +1072,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
if (request->fl_core.flc_type == fl->fl_core.flc_type)
goto out;
found = true;
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->fl_core, &dispose);
break;
}
 
@@ -1097,7 +1097,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
goto out;
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
-   locks_insert_lock_ctx(new_fl, &ctx->flc_flock);
+   locks_insert_lock_ctx(&new_fl->fl_core, &ctx->flc_flock);
new_fl = NULL;
error = 0;
 
@@ -1236,7 +1236,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
else
request->fl_end = fl->fl_end;
if (added) {
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->fl_core, &dispose);
continue;
}
request = fl;
@@ -1265,7 +1265,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
 * one (This may happen several times).
 */
if (added) {
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->fl_core, &dispose);
continue;
}
/*
@@ -1282,9 +1282,9 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
locks_move_blocks(new_fl, request);
request = new_fl;
new_fl = NULL;
-   locks_insert_lock_ctx(request,
+   locks_insert_lock_ctx(&request->fl_core,
  &fl->fl_core.flc_list);
-   locks_delete_lock_ctx(fl, &dispose);
+   locks_delete_lock_ctx(&fl->fl_core, &dispose);
added = true;
}
}
@@ -1313,7 +1313,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
}
locks_copy_lock(new_fl, request);
locks_move_blocks(new_fl, request);
-   locks_insert_lock_ctx(new_fl, &fl->fl_core.flc_list);
+   locks_insert_lock_ctx(&new_fl->fl_core, &fl->fl_core.flc_list);
fl = new_fl;
new_fl = NULL;
}
@@ -1325,7 +1325,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
left = new_fl2;
new_fl2 = NULL;
locks_copy_lock(left, right);
-   locks_insert_lock_ctx(left, &fl->fl_core.flc_list);
+   locks_insert_lock_ctx(&left->fl_core, &fl->fl_core.flc_list);
}
right->fl_start = request->fl_end + 1;
l

[PATCH v2 25/41] filelock: convert locks_wake_up_blocks to take a file_lock_core pointer

2024-01-25 Thread Jeff Layton
Have locks_wake_up_blocks take a file_lock_core pointer, and fix up the
callers to pass one in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 6182f5c5e7b4..03985cfb7eff 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -806,7 +806,7 @@ static void locks_insert_block(struct file_lock_core 
*blocker,
  *
  * Must be called with the inode->flc_lock held!
  */
-static void locks_wake_up_blocks(struct file_lock *blocker)
+static void locks_wake_up_blocks(struct file_lock_core *blocker)
 {
/*
 * Avoid taking global lock if list is empty. This is safe since new
@@ -815,11 +815,11 @@ static void locks_wake_up_blocks(struct file_lock 
*blocker)
 * fl_blocked_requests list does not require the flc_lock, so we must
 * recheck list_empty() after acquiring the blocked_lock_lock.
 */
-   if (list_empty(&blocker->fl_core.flc_blocked_requests))
+   if (list_empty(&blocker->flc_blocked_requests))
return;
 
spin_lock(&blocked_lock_lock);
-   __locks_wake_up_blocks(&blocker->fl_core);
+   __locks_wake_up_blocks(blocker);
spin_unlock(&blocked_lock_lock);
 }
 
@@ -835,7 +835,7 @@ locks_unlink_lock_ctx(struct file_lock *fl)
 {
locks_delete_global_locks(&fl->fl_core);
list_del_init(&fl->fl_core.flc_list);
-   locks_wake_up_blocks(fl);
+   locks_wake_up_blocks(&fl->fl_core);
 }
 
 static void
@@ -1328,11 +1328,11 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
locks_insert_lock_ctx(left, &fl->fl_core.flc_list);
}
right->fl_start = request->fl_end + 1;
-   locks_wake_up_blocks(right);
+   locks_wake_up_blocks(&right->fl_core);
}
if (left) {
left->fl_end = request->fl_start - 1;
-   locks_wake_up_blocks(left);
+   locks_wake_up_blocks(&left->fl_core);
}
  out:
spin_unlock(&ctx->flc_lock);
@@ -1414,7 +1414,7 @@ int lease_modify(struct file_lock *fl, int arg, struct 
list_head *dispose)
if (error)
return error;
lease_clear_pending(fl, arg);
-   locks_wake_up_blocks(fl);
+   locks_wake_up_blocks(&fl->fl_core);
if (arg == F_UNLCK) {
struct file *filp = fl->fl_core.flc_file;
 

-- 
2.43.0




[PATCH v2 24/41] filelock: make assign_type helper take a file_lock_core pointer

2024-01-25 Thread Jeff Layton
Have assign_type take struct file_lock_core instead of file_lock.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 647a778d2c85..6182f5c5e7b4 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -439,13 +439,13 @@ static void flock_make_lock(struct file *filp, struct 
file_lock *fl, int type)
fl->fl_end = OFFSET_MAX;
 }
 
-static int assign_type(struct file_lock *fl, int type)
+static int assign_type(struct file_lock_core *flc, int type)
 {
switch (type) {
case F_RDLCK:
case F_WRLCK:
case F_UNLCK:
-   fl->fl_core.flc_type = type;
+   flc->flc_type = type;
break;
default:
return -EINVAL;
@@ -497,7 +497,7 @@ static int flock64_to_posix_lock(struct file *filp, struct 
file_lock *fl,
fl->fl_ops = NULL;
fl->fl_lmops = NULL;
 
-   return assign_type(fl, l->l_type);
+   return assign_type(&fl->fl_core, l->l_type);
 }
 
 /* Verify a "struct flock" and copy it to a "struct file_lock" as a POSIX
@@ -552,7 +552,7 @@ static const struct lock_manager_operations 
lease_manager_ops = {
  */
 static int lease_init(struct file *filp, int type, struct file_lock *fl)
 {
-   if (assign_type(fl, type) != 0)
+   if (assign_type(&fl->fl_core, type) != 0)
return -EINVAL;
 
fl->fl_core.flc_owner = filp;
@@ -1409,7 +1409,7 @@ static void lease_clear_pending(struct file_lock *fl, int 
arg)
 /* We already had a lease on this file; just change its type */
 int lease_modify(struct file_lock *fl, int arg, struct list_head *dispose)
 {
-   int error = assign_type(fl, arg);
+   int error = assign_type(&fl->fl_core, arg);
 
if (error)
return error;

-- 
2.43.0




[PATCH v2 23/41] filelock: reorganize locks_delete_block and __locks_insert_block

2024-01-25 Thread Jeff Layton
Rename the old __locks_delete_block to __locks_unlink_block. Change the
old locks_delete_block function to __locks_delete_block and have it take
a file_lock_core. Make locks_delete_block a simple wrapper around
__locks_delete_block.

Also, change __locks_insert_block to take struct file_lock_core, and
fix up its callers.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 42 ++
 1 file changed, 22 insertions(+), 20 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 739af36d98df..647a778d2c85 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -659,7 +659,7 @@ static void locks_delete_global_blocked(struct 
file_lock_core *waiter)
  *
  * Must be called with blocked_lock_lock held.
  */
-static void __locks_delete_block(struct file_lock_core *waiter)
+static void __locks_unlink_block(struct file_lock_core *waiter)
 {
locks_delete_global_blocked(waiter);
list_del_init(&waiter->flc_blocked_member);
@@ -675,7 +675,7 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
  struct file_lock_core, flc_blocked_member);
 
fl = file_lock(waiter);
-   __locks_delete_block(waiter);
+   __locks_unlink_block(waiter);
if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
fl->fl_lmops && fl->fl_lmops->lm_notify)
fl->fl_lmops->lm_notify(fl);
@@ -691,16 +691,9 @@ static void __locks_wake_up_blocks(struct file_lock_core 
*blocker)
}
 }
 
-/**
- * locks_delete_block - stop waiting for a file lock
- * @waiter: the lock which was waiting
- *
- * lockd/nfsd need to disconnect the lock while working on it.
- */
-int locks_delete_block(struct file_lock *waiter_fl)
+static int __locks_delete_block(struct file_lock_core *waiter)
 {
int status = -ENOENT;
-   struct file_lock_core *waiter = &waiter_fl->fl_core;
 
/*
 * If fl_blocker is NULL, it won't be set again as this thread "owns"
@@ -731,7 +724,7 @@ int locks_delete_block(struct file_lock *waiter_fl)
if (waiter->flc_blocker)
status = 0;
__locks_wake_up_blocks(waiter);
-   __locks_delete_block(waiter);
+   __locks_unlink_block(waiter);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -741,6 +734,17 @@ int locks_delete_block(struct file_lock *waiter_fl)
spin_unlock(_lock_lock);
return status;
 }
+
+/**
+ * locks_delete_block - stop waiting for a file lock
+ * @waiter: the lock which was waiting
+ *
+ * lockd/nfsd need to disconnect the lock while working on it.
+ */
+int locks_delete_block(struct file_lock *waiter)
+{
+   return __locks_delete_block(&waiter->fl_core);
+}
 EXPORT_SYMBOL(locks_delete_block);
 
 /* Insert waiter into blocker's block list.
@@ -758,13 +762,11 @@ EXPORT_SYMBOL(locks_delete_block);
  * waiters, and add beneath any waiter that blocks the new waiter.
  * Thus wakeups don't happen until needed.
  */
-static void __locks_insert_block(struct file_lock *blocker_fl,
-struct file_lock *waiter_fl,
+static void __locks_insert_block(struct file_lock_core *blocker,
+struct file_lock_core *waiter,
 bool conflict(struct file_lock_core *,
   struct file_lock_core *))
 {
-   struct file_lock_core *blocker = &blocker_fl->fl_core;
-   struct file_lock_core *waiter = &waiter_fl->fl_core;
struct file_lock_core *flc;
 
BUG_ON(!list_empty(&waiter->flc_blocked_member));
@@ -789,8 +791,8 @@ static void __locks_insert_block(struct file_lock 
*blocker_fl,
 }
 
 /* Must be called with flc_lock held. */
-static void locks_insert_block(struct file_lock *blocker,
-  struct file_lock *waiter,
+static void locks_insert_block(struct file_lock_core *blocker,
+  struct file_lock_core *waiter,
   bool conflict(struct file_lock_core *,
 struct file_lock_core *))
 {
@@ -1088,7 +1090,7 @@ static int flock_lock_inode(struct inode *inode, struct 
file_lock *request)
if (!(request->fl_core.flc_flags & FL_SLEEP))
goto out;
error = FILE_LOCK_DEFERRED;
-   locks_insert_block(fl, request, flock_locks_conflict);
+   locks_insert_block(&fl->fl_core, &request->fl_core, flock_locks_conflict);
goto out;
}
if (request->fl_core.flc_flags & FL_ACCESS)
@@ -1182,7 +1184,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
__locks_wake_up_blocks(&request->fl_core);
if (likely(!posix_locks_deadlock(request, fl))) {
  

[PATCH v2 19/41] filelock: make __locks_delete_block and __locks_wake_up_blocks take file_lock_core

2024-01-25 Thread Jeff Layton
Convert __locks_delete_block and __locks_wake_up_blocks to take a struct
file_lock_core pointer.

While we could do this in another way, we're going to need to add a
file_lock() helper function later anyway, so introduce and use it now.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 45 +++--
 1 file changed, 27 insertions(+), 18 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index d6d47612527c..fb113103dc1b 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -69,6 +69,11 @@
 
 #include 
 
+static struct file_lock *file_lock(struct file_lock_core *flc)
+{
+   return container_of(flc, struct file_lock, fl_core);
+}
+
 static bool lease_breaking(struct file_lock *fl)
 {
return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -654,31 +659,35 @@ static void locks_delete_global_blocked(struct 
file_lock_core *waiter)
  *
  * Must be called with blocked_lock_lock held.
  */
-static void __locks_delete_block(struct file_lock *waiter)
+static void __locks_delete_block(struct file_lock_core *waiter)
 {
-   locks_delete_global_blocked(&waiter->fl_core);
-   list_del_init(&waiter->fl_core.flc_blocked_member);
+   locks_delete_global_blocked(waiter);
+   list_del_init(&waiter->flc_blocked_member);
 }
 
-static void __locks_wake_up_blocks(struct file_lock *blocker)
+static void __locks_wake_up_blocks(struct file_lock_core *blocker)
 {
-   while (!list_empty(&blocker->fl_core.flc_blocked_requests)) {
-   struct file_lock *waiter;
+   while (!list_empty(&blocker->flc_blocked_requests)) {
+   struct file_lock_core *waiter;
+   struct file_lock *fl;
+
+   waiter = list_first_entry(&blocker->flc_blocked_requests,
+ struct file_lock_core, flc_blocked_member);
 
-   waiter = list_first_entry(&blocker->fl_core.flc_blocked_requests,
- struct file_lock, fl_core.flc_blocked_member);
+   fl = file_lock(waiter);
__locks_delete_block(waiter);
-   if (waiter->fl_lmops && waiter->fl_lmops->lm_notify)
-   waiter->fl_lmops->lm_notify(waiter);
+   if ((waiter->flc_flags & (FL_POSIX | FL_FLOCK)) &&
+   fl->fl_lmops && fl->fl_lmops->lm_notify)
+   fl->fl_lmops->lm_notify(fl);
else
-   wake_up(&waiter->fl_core.flc_wait);
+   wake_up(&waiter->flc_wait);
 
/*
-* The setting of fl_blocker to NULL marks the "done"
+* The setting of flc_blocker to NULL marks the "done"
 * point in deleting a block. Paired with acquire at the top
 * of locks_delete_block().
 */
-   smp_store_release(&waiter->fl_core.flc_blocker, NULL);
+   smp_store_release(&waiter->flc_blocker, NULL);
}
 }
 
@@ -720,8 +729,8 @@ int locks_delete_block(struct file_lock *waiter)
spin_lock(&blocked_lock_lock);
if (waiter->fl_core.flc_blocker)
status = 0;
-   __locks_wake_up_blocks(waiter);
-   __locks_delete_block(waiter);
+   __locks_wake_up_blocks(&waiter->fl_core);
+   __locks_delete_block(&waiter->fl_core);
 
/*
 * The setting of fl_blocker to NULL marks the "done" point in deleting
@@ -773,7 +782,7 @@ static void __locks_insert_block(struct file_lock *blocker,
 * waiter, but might not conflict with blocker, or the requests
 * and lock which block it.  So they all need to be woken.
 */
-   __locks_wake_up_blocks(waiter);
+   __locks_wake_up_blocks(&waiter->fl_core);
 }
 
 /* Must be called with flc_lock held. */
@@ -805,7 +814,7 @@ static void locks_wake_up_blocks(struct file_lock *blocker)
return;
 
spin_lock(_lock_lock);
-   __locks_wake_up_blocks(blocker);
+   __locks_wake_up_blocks(&blocker->fl_core);
spin_unlock(_lock_lock);
 }
 
@@ -1159,7 +1168,7 @@ static int posix_lock_inode(struct inode *inode, struct 
file_lock *request,
 * Ensure that we don't find any locks blocked on this
 * request during deadlock detection.
 */
-   __locks_wake_up_blocks(request);
+   __locks_wake_up_blocks(&request->fl_core);
if (likely(!posix_locks_deadlock(request, fl))) {
error = FILE_LOCK_DEFERRED;
__locks_insert_block(fl, request,

-- 
2.43.0




[PATCH v2 18/41] filelock: convert locks_{insert,delete}_global_blocked

2024-01-25 Thread Jeff Layton
Have locks_insert_global_blocked and locks_delete_global_blocked take a
struct file_lock_core pointer.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 13 ++---
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index ad4bb9cd4c9d..d6d47612527c 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -635,19 +635,18 @@ posix_owner_key(struct file_lock_core *flc)
return (unsigned long) flc->flc_owner;
 }
 
-static void locks_insert_global_blocked(struct file_lock *waiter)
+static void locks_insert_global_blocked(struct file_lock_core *waiter)
 {
lockdep_assert_held(_lock_lock);
 
-   hash_add(blocked_hash, &waiter->fl_core.flc_link,
-    posix_owner_key(&waiter->fl_core));
+   hash_add(blocked_hash, &waiter->flc_link, posix_owner_key(waiter));
 }
 
-static void locks_delete_global_blocked(struct file_lock *waiter)
+static void locks_delete_global_blocked(struct file_lock_core *waiter)
 {
lockdep_assert_held(_lock_lock);
 
-   hash_del(&waiter->fl_core.flc_link);
+   hash_del(&waiter->flc_link);
 }
 
 /* Remove waiter from blocker's block list.
@@ -657,7 +656,7 @@ static void locks_delete_global_blocked(struct file_lock 
*waiter)
  */
 static void __locks_delete_block(struct file_lock *waiter)
 {
-   locks_delete_global_blocked(waiter);
+   locks_delete_global_blocked(&waiter->fl_core);
list_del_init(&waiter->fl_core.flc_blocked_member);
 }
 
@@ -768,7 +767,7 @@ static void __locks_insert_block(struct file_lock *blocker,
list_add_tail(&waiter->fl_core.flc_blocked_member,
  &blocker->fl_core.flc_blocked_requests);
if ((blocker->fl_core.flc_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
-   locks_insert_global_blocked(waiter);
+   locks_insert_global_blocked(&waiter->fl_core);
 
/* The requests in waiter->fl_blocked are known to conflict with
 * waiter, but might not conflict with blocker, or the requests

-- 
2.43.0




[PATCH v2 16/41] filelock: convert posix_owner_key to take file_lock_core arg

2024-01-25 Thread Jeff Layton
Convert posix_owner_key to take struct file_lock_core pointer, and fix
up the callers to pass one in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index bd0cfee230ae..effe84f954f9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -630,9 +630,9 @@ static void locks_delete_global_locks(struct file_lock *fl)
 }
 
 static unsigned long
-posix_owner_key(struct file_lock *fl)
+posix_owner_key(struct file_lock_core *flc)
 {
-   return (unsigned long) fl->fl_core.flc_owner;
+   return (unsigned long) flc->flc_owner;
 }
 
 static void locks_insert_global_blocked(struct file_lock *waiter)
@@ -640,7 +640,7 @@ static void locks_insert_global_blocked(struct file_lock 
*waiter)
lockdep_assert_held(_lock_lock);
 
hash_add(blocked_hash, &waiter->fl_core.flc_link,
-    posix_owner_key(waiter));
+    posix_owner_key(&waiter->fl_core));
 }
 
 static void locks_delete_global_blocked(struct file_lock *waiter)
@@ -977,7 +977,7 @@ static struct file_lock *what_owner_is_waiting_for(struct 
file_lock *block_fl)
 {
struct file_lock *fl;
 
-   hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, 
posix_owner_key(block_fl)) {
+   hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, 
posix_owner_key(_fl->fl_core)) {
if (posix_same_owner(>fl_core, _fl->fl_core)) {
while (fl->fl_core.flc_blocker)
fl = fl->fl_core.flc_blocker;

-- 
2.43.0




[PATCH v2 15/41] filelock: make posix_same_owner take file_lock_core pointers

2024-01-25 Thread Jeff Layton
Change posix_same_owner to take struct file_lock_core pointers, and
convert the callers to pass those in.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index a0d6fc0e043a..bd0cfee230ae 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -590,9 +590,9 @@ static inline int locks_overlap(struct file_lock *fl1, struct file_lock *fl2)
 /*
  * Check whether two locks have the same owner.
  */
-static int posix_same_owner(struct file_lock *fl1, struct file_lock *fl2)
+static int posix_same_owner(struct file_lock_core *fl1, struct file_lock_core *fl2)
 {
-   return fl1->fl_core.flc_owner == fl2->fl_core.flc_owner;
+   return fl1->flc_owner == fl2->flc_owner;
 }
 
 /* Must be called with the flc_lock held! */
@@ -857,7 +857,7 @@ static bool posix_locks_conflict(struct file_lock *caller_fl,
    /* POSIX locks owned by the same process do not conflict with
     * each other.
     */
-   if (posix_same_owner(caller_fl, sys_fl))
+   if (posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
return false;
 
/* Check whether they overlap */
@@ -875,7 +875,7 @@ static bool posix_test_locks_conflict(struct file_lock *caller_fl,
 {
    /* F_UNLCK checks any locks on the same fd. */
    if (caller_fl->fl_core.flc_type == F_UNLCK) {
-       if (!posix_same_owner(caller_fl, sys_fl))
+       if (!posix_same_owner(&caller_fl->fl_core, &sys_fl->fl_core))
return false;
return locks_overlap(caller_fl, sys_fl);
}
@@ -978,7 +978,7 @@ static struct file_lock *what_owner_is_waiting_for(struct file_lock *block_fl)
    struct file_lock *fl;
 
    hash_for_each_possible(blocked_hash, fl, fl_core.flc_link, posix_owner_key(block_fl)) {
-       if (posix_same_owner(fl, block_fl)) {
+       if (posix_same_owner(&fl->fl_core, &block_fl->fl_core)) {
while (fl->fl_core.flc_blocker)
fl = fl->fl_core.flc_blocker;
return fl;
@@ -1005,7 +1005,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
    while ((block_fl = what_owner_is_waiting_for(block_fl))) {
        if (i++ > MAX_DEADLK_ITERATIONS)
            return 0;
-       if (posix_same_owner(caller_fl, block_fl))
+       if (posix_same_owner(&caller_fl->fl_core, &block_fl->fl_core))
return 1;
}
return 0;
@@ -1178,13 +1178,13 @@ static int posix_lock_inode(struct inode *inode, struct file_lock *request,
 
    /* Find the first old lock with the same owner as the new lock */
    list_for_each_entry(fl, &ctx->flc_posix, fl_core.flc_list) {
-       if (posix_same_owner(request, fl))
+       if (posix_same_owner(&request->fl_core, &fl->fl_core))
            break;
    }
 
    /* Process locks with this owner. */
    list_for_each_entry_safe_from(fl, tmp, &ctx->flc_posix, fl_core.flc_list) {
-       if (!posix_same_owner(request, fl))
+       if (!posix_same_owner(&request->fl_core, &fl->fl_core))
break;
 
/* Detect adjacent or overlapping regions (if same lock type) */

-- 
2.43.0




[PATCH v2 14/41] filelock: convert more internal functions to use file_lock_core

2024-01-25 Thread Jeff Layton
Convert more internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:

- locks_dump_ctx_list
- locks_check_ctx_file_list
- locks_release_private
- locks_owner_has_blockers

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 51 +--
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 3a91515dbccd..a0d6fc0e043a 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -197,13 +197,12 @@ locks_get_lock_context(struct inode *inode, int type)
 static void
 locks_dump_ctx_list(struct list_head *list, char *list_type)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
-   list_for_each_entry(fl, list, fl_core.flc_list) {
-   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
-   fl->fl_core.flc_owner, fl->fl_core.flc_flags,
-   fl->fl_core.flc_type, fl->fl_core.flc_pid);
-   }
+   list_for_each_entry(flc, list, flc_list)
+   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
+   list_type, flc->flc_owner, flc->flc_flags,
+   flc->flc_type, flc->flc_pid);
 }
 
 static void
@@ -224,20 +223,19 @@ locks_check_ctx_lists(struct inode *inode)
 }
 
 static void
-locks_check_ctx_file_list(struct file *filp, struct list_head *list,
-   char *list_type)
+locks_check_ctx_file_list(struct file *filp, struct list_head *list, char *list_type)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
struct inode *inode = file_inode(filp);
 
-   list_for_each_entry(fl, list, fl_core.flc_list)
-   if (fl->fl_core.flc_file == filp)
+   list_for_each_entry(flc, list, flc_list)
+   if (flc->flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
-   fl->fl_core.flc_owner, fl->fl_core.flc_flags,
-   fl->fl_core.flc_type, fl->fl_core.flc_pid);
+   flc->flc_owner, flc->flc_flags,
+   flc->flc_type, flc->flc_pid);
 }
 
 void
@@ -274,11 +272,13 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
 void locks_release_private(struct file_lock *fl)
 {
-   BUG_ON(waitqueue_active(&fl->fl_core.flc_wait));
-   BUG_ON(!list_empty(&fl->fl_core.flc_list));
-   BUG_ON(!list_empty(&fl->fl_core.flc_blocked_requests));
-   BUG_ON(!list_empty(&fl->fl_core.flc_blocked_member));
-   BUG_ON(!hlist_unhashed(&fl->fl_core.flc_link));
+   struct file_lock_core *flc = &fl->fl_core;
+
+   BUG_ON(waitqueue_active(&flc->flc_wait));
+   BUG_ON(!list_empty(&flc->flc_list));
+   BUG_ON(!list_empty(&flc->flc_blocked_requests));
+   BUG_ON(!list_empty(&flc->flc_blocked_member));
+   BUG_ON(!hlist_unhashed(&flc->flc_link));
 
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -288,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
 
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
-   fl->fl_lmops->lm_put_owner(fl->fl_core.flc_owner);
-   fl->fl_core.flc_owner = NULL;
+   fl->fl_lmops->lm_put_owner(flc->flc_owner);
+   flc->flc_owner = NULL;
}
fl->fl_lmops = NULL;
}
@@ -305,16 +305,15 @@ EXPORT_SYMBOL_GPL(locks_release_private);
  *   %true: @owner has at least one blocker
  *   %false: @owner has no blockers
  */
-bool locks_owner_has_blockers(struct file_lock_context *flctx,
-   fl_owner_t owner)
+bool locks_owner_has_blockers(struct file_lock_context *flctx, fl_owner_t owner)
 {
-   struct file_lock *fl;
+   struct file_lock_core *flc;
 
    spin_lock(&flctx->flc_lock);
-   list_for_each_entry(fl, &flctx->flc_posix, fl_core.flc_list) {
-       if (fl->fl_core.flc_owner != owner)
+   list_for_each_entry(flc, &flctx->flc_posix, flc_list) {
+       if (flc->flc_owner != owner)
            continue;
-       if (!list_empty(&fl->fl_core.flc_blocked_requests)) {
+       if (!list_empty(&flc->flc_blocked_requests)) {
            spin_unlock(&flctx->flc_lock);
return true;
}

-- 
2.43.0




[PATCH v2 13/41] filelock: convert some internal functions to use file_lock_core instead

2024-01-25 Thread Jeff Layton
Convert some internal fs/locks.c functions to take and deal with struct
file_lock_core instead of struct file_lock:

- locks_init_lock_heads
- locks_alloc_lock
- locks_init_lock

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index b06fa4dea298..3a91515dbccd 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -251,13 +251,13 @@ locks_free_lock_context(struct inode *inode)
}
 }
 
-static void locks_init_lock_heads(struct file_lock *fl)
+static void locks_init_lock_heads(struct file_lock_core *flc)
 {
-   INIT_HLIST_NODE(&fl->fl_core.flc_link);
-   INIT_LIST_HEAD(&fl->fl_core.flc_list);
-   INIT_LIST_HEAD(&fl->fl_core.flc_blocked_requests);
-   INIT_LIST_HEAD(&fl->fl_core.flc_blocked_member);
-   init_waitqueue_head(&fl->fl_core.flc_wait);
+   INIT_HLIST_NODE(&flc->flc_link);
+   INIT_LIST_HEAD(&flc->flc_list);
+   INIT_LIST_HEAD(&flc->flc_blocked_requests);
+   INIT_LIST_HEAD(&flc->flc_blocked_member);
+   init_waitqueue_head(&flc->flc_wait);
 }
 
 /* Allocate an empty lock structure. */
@@ -266,7 +266,7 @@ struct file_lock *locks_alloc_lock(void)
struct file_lock *fl = kmem_cache_zalloc(filelock_cache, GFP_KERNEL);
 
if (fl)
-   locks_init_lock_heads(fl);
+       locks_init_lock_heads(&fl->fl_core);
 
return fl;
 }
@@ -347,7 +347,7 @@ locks_dispose_list(struct list_head *dispose)
 void locks_init_lock(struct file_lock *fl)
 {
memset(fl, 0, sizeof(struct file_lock));
-   locks_init_lock_heads(fl);
+   locks_init_lock_heads(&fl->fl_core);
 }
 EXPORT_SYMBOL(locks_init_lock);
 

-- 
2.43.0




[PATCH v2 12/41] filelock: have fs/locks.c deal with file_lock_core directly

2024-01-25 Thread Jeff Layton
Convert fs/locks.c to access fl_core fields directly rather than using
the backward-compatibility macros. Most of this was done with
coccinelle, with a few by-hand fixups.

Signed-off-by: Jeff Layton 
---
 fs/locks.c  | 479 
 include/trace/events/filelock.h |  32 +--
 2 files changed, 260 insertions(+), 251 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index cee3f183a872..b06fa4dea298 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -48,8 +48,6 @@
  * children.
  *
  */
-#define _NEED_FILE_LOCK_FIELD_MACROS
-
 #include 
 #include 
 #include 
@@ -73,16 +71,16 @@
 
 static bool lease_breaking(struct file_lock *fl)
 {
-   return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
+   return fl->fl_core.flc_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
 }
 
 static int target_leasetype(struct file_lock *fl)
 {
-   if (fl->fl_flags & FL_UNLOCK_PENDING)
+   if (fl->fl_core.flc_flags & FL_UNLOCK_PENDING)
return F_UNLCK;
-   if (fl->fl_flags & FL_DOWNGRADE_PENDING)
+   if (fl->fl_core.flc_flags & FL_DOWNGRADE_PENDING)
return F_RDLCK;
-   return fl->fl_type;
+   return fl->fl_core.flc_type;
 }
 
 static int leases_enable = 1;
@@ -201,8 +199,10 @@ locks_dump_ctx_list(struct list_head *list, char *list_type)
 {
struct file_lock *fl;
 
-   list_for_each_entry(fl, list, fl_list) {
-   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type, fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+   list_for_each_entry(fl, list, fl_core.flc_list) {
+   pr_warn("%s: fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n", list_type,
+   fl->fl_core.flc_owner, fl->fl_core.flc_flags,
+   fl->fl_core.flc_type, fl->fl_core.flc_pid);
}
 }
 
@@ -230,13 +230,14 @@ locks_check_ctx_file_list(struct file *filp, struct list_head *list,
struct file_lock *fl;
struct inode *inode = file_inode(filp);
 
-   list_for_each_entry(fl, list, fl_list)
-   if (fl->fl_file == filp)
+   list_for_each_entry(fl, list, fl_core.flc_list)
+   if (fl->fl_core.flc_file == filp)
pr_warn("Leaked %s lock on dev=0x%x:0x%x ino=0x%lx "
" fl_owner=%p fl_flags=0x%x fl_type=0x%x fl_pid=%u\n",
list_type, MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino,
-   fl->fl_owner, fl->fl_flags, fl->fl_type, fl->fl_pid);
+   fl->fl_core.flc_owner, fl->fl_core.flc_flags,
+   fl->fl_core.flc_type, fl->fl_core.flc_pid);
 }
 
 void
@@ -252,11 +253,11 @@ locks_free_lock_context(struct inode *inode)
 
 static void locks_init_lock_heads(struct file_lock *fl)
 {
-   INIT_HLIST_NODE(&fl->fl_link);
-   INIT_LIST_HEAD(&fl->fl_list);
-   INIT_LIST_HEAD(&fl->fl_blocked_requests);
-   INIT_LIST_HEAD(&fl->fl_blocked_member);
-   init_waitqueue_head(&fl->fl_wait);
+   INIT_HLIST_NODE(&fl->fl_core.flc_link);
+   INIT_LIST_HEAD(&fl->fl_core.flc_list);
+   INIT_LIST_HEAD(&fl->fl_core.flc_blocked_requests);
+   INIT_LIST_HEAD(&fl->fl_core.flc_blocked_member);
+   init_waitqueue_head(&fl->fl_core.flc_wait);
 }
 
 /* Allocate an empty lock structure. */
@@ -273,11 +274,11 @@ EXPORT_SYMBOL_GPL(locks_alloc_lock);
 
 void locks_release_private(struct file_lock *fl)
 {
-   BUG_ON(waitqueue_active(&fl->fl_wait));
-   BUG_ON(!list_empty(&fl->fl_list));
-   BUG_ON(!list_empty(&fl->fl_blocked_requests));
-   BUG_ON(!list_empty(&fl->fl_blocked_member));
-   BUG_ON(!hlist_unhashed(&fl->fl_link));
+   BUG_ON(waitqueue_active(&fl->fl_core.flc_wait));
+   BUG_ON(!list_empty(&fl->fl_core.flc_list));
+   BUG_ON(!list_empty(&fl->fl_core.flc_blocked_requests));
+   BUG_ON(!list_empty(&fl->fl_core.flc_blocked_member));
+   BUG_ON(!hlist_unhashed(&fl->fl_core.flc_link));
 
if (fl->fl_ops) {
if (fl->fl_ops->fl_release_private)
@@ -287,8 +288,8 @@ void locks_release_private(struct file_lock *fl)
 
if (fl->fl_lmops) {
if (fl->fl_lmops->lm_put_owner) {
-   fl->fl_lmops->lm_put_owner(fl->fl_owner);
-   fl->fl_owner = NULL;
+   fl->fl_lmops->lm_put_owner(fl->fl_core.flc_owner);
+   fl->fl_core.flc_owner = NULL;
}
fl->fl_lmops = NULL;
}
@@ -310,10 +311,10 @@ bool locks_owner_has_blockers(struct file_lock_context *flctx,
struct file_lock *fl;
 
spin_l

[PATCH v2 11/41] filelock: add coccinelle scripts to move fields to struct file_lock_core

2024-01-25 Thread Jeff Layton
This patch creates two ".cocci" semantic patches in a top level cocci/
directory. These patches were used to help generate several of the
following patches. We can drop this patch or move the files to a more
appropriate location before merging.

Signed-off-by: Jeff Layton 
---
 cocci/filelock.cocci | 88 
 cocci/nlm.cocci  | 81 +++
 2 files changed, 169 insertions(+)

diff --git a/cocci/filelock.cocci b/cocci/filelock.cocci
new file mode 100644
index ..93fb4ed8341a
--- /dev/null
+++ b/cocci/filelock.cocci
@@ -0,0 +1,88 @@
+@@
+struct file_lock *fl;
+@@
+(
+- fl->fl_blocker
++ fl->fl_core.flc_blocker
+|
+- fl->fl_list
++ fl->fl_core.flc_list
+|
+- fl->fl_link
++ fl->fl_core.flc_link
+|
+- fl->fl_blocked_requests
++ fl->fl_core.flc_blocked_requests
+|
+- fl->fl_blocked_member
++ fl->fl_core.flc_blocked_member
+|
+- fl->fl_owner
++ fl->fl_core.flc_owner
+|
+- fl->fl_flags
++ fl->fl_core.flc_flags
+|
+- fl->fl_type
++ fl->fl_core.flc_type
+|
+- fl->fl_pid
++ fl->fl_core.flc_pid
+|
+- fl->fl_link_cpu
++ fl->fl_core.flc_link_cpu
+|
+- fl->fl_wait
++ fl->fl_core.flc_wait
+|
+- fl->fl_file
++ fl->fl_core.flc_file
+)
+
+@@
+struct file_lock fl;
+@@
+(
+- fl.fl_blocker
++ fl.fl_core.flc_blocker
+|
+- fl.fl_list
++ fl.fl_core.flc_list
+|
+- fl.fl_link
++ fl.fl_core.flc_link
+|
+- fl.fl_blocked_requests
++ fl.fl_core.flc_blocked_requests
+|
+- fl.fl_blocked_member
++ fl.fl_core.flc_blocked_member
+|
+- fl.fl_owner
++ fl.fl_core.flc_owner
+|
+- fl.fl_flags
++ fl.fl_core.flc_flags
+|
+- fl.fl_type
++ fl.fl_core.flc_type
+|
+- fl.fl_pid
++ fl.fl_core.flc_pid
+|
+- fl.fl_link_cpu
++ fl.fl_core.flc_link_cpu
+|
+- fl.fl_wait
++ fl.fl_core.flc_wait
+|
+- fl.fl_file
++ fl.fl_core.flc_file
+)
+
+@@
+struct file_lock *fl;
+struct list_head *li;
+@@
+- list_for_each_entry(fl, li, fl_list)
++ list_for_each_entry(fl, li, fl_core.flc_list)
diff --git a/cocci/nlm.cocci b/cocci/nlm.cocci
new file mode 100644
index ..bf22f0a75812
--- /dev/null
+++ b/cocci/nlm.cocci
@@ -0,0 +1,81 @@
+@@
+struct nlm_lock *nlck;
+@@
+(
+- nlck->fl.fl_blocker
++ nlck->fl.fl_core.flc_blocker
+|
+- nlck->fl.fl_list
++ nlck->fl.fl_core.flc_list
+|
+- nlck->fl.fl_link
++ nlck->fl.fl_core.flc_link
+|
+- nlck->fl.fl_blocked_requests
++ nlck->fl.fl_core.flc_blocked_requests
+|
+- nlck->fl.fl_blocked_member
++ nlck->fl.fl_core.flc_blocked_member
+|
+- nlck->fl.fl_owner
++ nlck->fl.fl_core.flc_owner
+|
+- nlck->fl.fl_flags
++ nlck->fl.fl_core.flc_flags
+|
+- nlck->fl.fl_type
++ nlck->fl.fl_core.flc_type
+|
+- nlck->fl.fl_pid
++ nlck->fl.fl_core.flc_pid
+|
+- nlck->fl.fl_link_cpu
++ nlck->fl.fl_core.flc_link_cpu
+|
+- nlck->fl.fl_wait
++ nlck->fl.fl_core.flc_wait
+|
+- nlck->fl.fl_file
++ nlck->fl.fl_core.flc_file
+)
+
+@@
+struct nlm_args *argp;
+@@
+(
+- argp->lock.fl.fl_blocker
++ argp->lock.fl.fl_core.flc_blocker
+|
+- argp->lock.fl.fl_list
++ argp->lock.fl.fl_core.flc_list
+|
+- argp->lock.fl.fl_link
++ argp->lock.fl.fl_core.flc_link
+|
+- argp->lock.fl.fl_blocked_requests
++ argp->lock.fl.fl_core.flc_blocked_requests
+|
+- argp->lock.fl.fl_blocked_member
++ argp->lock.fl.fl_core.flc_blocked_member
+|
+- argp->lock.fl.fl_owner
++ argp->lock.fl.fl_core.flc_owner
+|
+- argp->lock.fl.fl_flags
++ argp->lock.fl.fl_core.flc_flags
+|
+- argp->lock.fl.fl_type
++ argp->lock.fl.fl_core.flc_type
+|
+- argp->lock.fl.fl_pid
++ argp->lock.fl.fl_core.flc_pid
+|
+- argp->lock.fl.fl_link_cpu
++ argp->lock.fl.fl_core.flc_link_cpu
+|
+- argp->lock.fl.fl_wait
++ argp->lock.fl.fl_core.flc_wait
+|
+- argp->lock.fl.fl_file
++ argp->lock.fl.fl_core.flc_file
+)

-- 
2.43.0




[PATCH v2 05/41] nfsd: rename fl_type and fl_flags variables in nfsd4_lock

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some macros with names that
clash with the variable names here. Rename them.

Signed-off-by: Jeff Layton 
---
 fs/nfsd/nfs4state.c | 24 
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
index 2fa54cfd4882..f66e67394157 100644
--- a/fs/nfsd/nfs4state.c
+++ b/fs/nfsd/nfs4state.c
@@ -7493,8 +7493,8 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
int lkflg;
int err;
bool new = false;
-   unsigned char fl_type;
-   unsigned int fl_flags = FL_POSIX;
+   unsigned char type;
+   unsigned int flags = FL_POSIX;
struct net *net = SVC_NET(rqstp);
struct nfsd_net *nn = net_generic(net, nfsd_net_id);
 
@@ -7557,14 +7557,14 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
 
if (lock->lk_reclaim)
-   fl_flags |= FL_RECLAIM;
+   flags |= FL_RECLAIM;
 
fp = lock_stp->st_stid.sc_file;
switch (lock->lk_type) {
case NFS4_READW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags |= FL_SLEEP;
+   flags |= FL_SLEEP;
fallthrough;
case NFS4_READ_LT:
        spin_lock(&fp->fi_lock);
@@ -7572,12 +7572,12 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
        if (nf)
            get_lock_access(lock_stp, NFS4_SHARE_ACCESS_READ);
        spin_unlock(&fp->fi_lock);
-   fl_type = F_RDLCK;
+   type = F_RDLCK;
break;
case NFS4_WRITEW_LT:
if (nfsd4_has_session(cstate) ||
exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags |= FL_SLEEP;
+   flags |= FL_SLEEP;
fallthrough;
case NFS4_WRITE_LT:
        spin_lock(&fp->fi_lock);
@@ -7585,7 +7585,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
        if (nf)
            get_lock_access(lock_stp, NFS4_SHARE_ACCESS_WRITE);
        spin_unlock(&fp->fi_lock);
-   fl_type = F_WRLCK;
+   type = F_WRLCK;
break;
default:
status = nfserr_inval;
@@ -7605,7 +7605,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 * on those filesystems:
 */
if (!exportfs_lock_op_is_async(sb->s_export_op))
-   fl_flags &= ~FL_SLEEP;
+   flags &= ~FL_SLEEP;
 
    nbl = find_or_allocate_block(lock_sop, &fp->fi_fhandle, nn);
if (!nbl) {
@@ -7615,11 +7615,11 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
}
 
    file_lock = &nbl->nbl_lock;
-   file_lock->fl_type = fl_type;
+   file_lock->fl_type = type;
    file_lock->fl_owner = (fl_owner_t)lockowner(nfs4_get_stateowner(&lock_sop->lo_owner));
file_lock->fl_pid = current->tgid;
file_lock->fl_file = nf->nf_file;
-   file_lock->fl_flags = fl_flags;
+   file_lock->fl_flags = flags;
file_lock->fl_lmops = _posix_mng_ops;
file_lock->fl_start = lock->lk_offset;
file_lock->fl_end = last_byte_offset(lock->lk_offset, lock->lk_length);
@@ -7632,7 +7632,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
goto out;
}
 
-   if (fl_flags & FL_SLEEP) {
+   if (flags & FL_SLEEP) {
nbl->nbl_time = ktime_get_boottime_seconds();
        spin_lock(&nn->blocked_locks_lock);
        list_add_tail(&nbl->nbl_list, &lock_sop->lo_blocked);
@@ -7669,7 +7669,7 @@ nfsd4_lock(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
 out:
if (nbl) {
/* dequeue it if we queued it before */
-   if (fl_flags & FL_SLEEP) {
+   if (flags & FL_SLEEP) {
            spin_lock(&nn->blocked_locks_lock);
            if (!list_empty(&nbl->nbl_list) &&
                !list_empty(&nbl->nbl_lru)) {

-- 
2.43.0




[PATCH v2 09/41] filelock: drop the IS_* macros

2024-01-25 Thread Jeff Layton
These don't add a lot of value over just open-coding the flag check.

Suggested-by: NeilBrown 
Signed-off-by: Jeff Layton 
---
 fs/locks.c | 32 +++-
 1 file changed, 15 insertions(+), 17 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index 1eceaa56e47f..87212f86eca9 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -70,12 +70,6 @@
 
 #include 
 
-#define IS_POSIX(fl)   (fl->fl_flags & FL_POSIX)
-#define IS_FLOCK(fl)   (fl->fl_flags & FL_FLOCK)
-#define IS_LEASE(fl)   (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT))
-#define IS_OFDLCK(fl)  (fl->fl_flags & FL_OFDLCK)
-#define IS_REMOTELCK(fl)   (fl->fl_pid <= 0)
-
 static bool lease_breaking(struct file_lock *fl)
 {
return fl->fl_flags & (FL_UNLOCK_PENDING | FL_DOWNGRADE_PENDING);
@@ -767,7 +761,7 @@ static void __locks_insert_block(struct file_lock *blocker,
}
waiter->fl_blocker = blocker;
    list_add_tail(&waiter->fl_blocked_member, &blocker->fl_blocked_requests);
-   if (IS_POSIX(blocker) && !IS_OFDLCK(blocker))
+   if ((blocker->fl_flags & (FL_POSIX|FL_OFDLCK)) == FL_POSIX)
locks_insert_global_blocked(waiter);
 
/* The requests in waiter->fl_blocked are known to conflict with
@@ -999,7 +993,7 @@ static int posix_locks_deadlock(struct file_lock *caller_fl,
 * This deadlock detector can't reasonably detect deadlocks with
 * FL_OFDLCK locks, since they aren't owned by a process, per-se.
 */
-   if (IS_OFDLCK(caller_fl))
+   if (caller_fl->fl_flags & FL_OFDLCK)
return 0;
 
while ((block_fl = what_owner_is_waiting_for(block_fl))) {
@@ -2150,10 +2144,13 @@ static pid_t locks_translate_pid(struct file_lock *fl, struct pid_namespace *ns)
pid_t vnr;
struct pid *pid;
 
-   if (IS_OFDLCK(fl))
+   if (fl->fl_flags & FL_OFDLCK)
return -1;
-   if (IS_REMOTELCK(fl))
+
+   /* Remote locks report a negative pid value */
+   if (fl->fl_pid <= 0)
return fl->fl_pid;
+
/*
 * If the flock owner process is dead and its pid has been already
 * freed, the translation below won't work, but we still want to show
@@ -2697,7 +2694,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
struct inode *inode = NULL;
unsigned int pid;
    struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
-   int type;
+   int type = fl->fl_type;
 
pid = locks_translate_pid(fl, proc_pidns);
/*
@@ -2714,19 +2711,21 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
if (repeat)
seq_printf(f, "%*s", repeat - 1 + (int)strlen(pfx), pfx);
 
-   if (IS_POSIX(fl)) {
+   if (fl->fl_flags & FL_POSIX) {
if (fl->fl_flags & FL_ACCESS)
seq_puts(f, "ACCESS");
-   else if (IS_OFDLCK(fl))
+   else if (fl->fl_flags & FL_OFDLCK)
seq_puts(f, "OFDLCK");
else
seq_puts(f, "POSIX ");
 
seq_printf(f, " %s ",
 (inode == NULL) ? "*NOINODE*" : "ADVISORY ");
-   } else if (IS_FLOCK(fl)) {
+   } else if (fl->fl_flags & FL_FLOCK) {
seq_puts(f, "FLOCK  ADVISORY  ");
-   } else if (IS_LEASE(fl)) {
+   } else if (fl->fl_flags & (FL_LEASE|FL_DELEG|FL_LAYOUT)) {
+   type = target_leasetype(fl);
+
if (fl->fl_flags & FL_DELEG)
seq_puts(f, "DELEG  ");
else
@@ -2741,7 +2740,6 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_puts(f, "UNKNOWN UNKNOWN  ");
}
-   type = IS_LEASE(fl) ? target_leasetype(fl) : fl->fl_type;
 
seq_printf(f, "%s ", (type == F_WRLCK) ? "WRITE" :
 (type == F_RDLCK) ? "READ" : "UNLCK");
@@ -2753,7 +2751,7 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
} else {
seq_printf(f, "%d :0 ", pid);
}
-   if (IS_POSIX(fl)) {
+   if (fl->fl_flags & FL_POSIX) {
if (fl->fl_end == OFFSET_MAX)
seq_printf(f, "%Ld EOF\n", fl->fl_start);
else

-- 
2.43.0




[PATCH v2 08/41] afs: rename fl_type variable in afs_next_locker

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce macros that conflict with the
variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/afs/flock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/afs/flock.c b/fs/afs/flock.c
index 9c6dea3139f5..e7feaf66bddf 100644
--- a/fs/afs/flock.c
+++ b/fs/afs/flock.c
@@ -112,16 +112,16 @@ static void afs_next_locker(struct afs_vnode *vnode, int error)
 {
struct file_lock *p, *_p, *next = NULL;
struct key *key = vnode->lock_key;
-   unsigned int fl_type = F_RDLCK;
+   unsigned int type = F_RDLCK;
 
_enter("");
 
if (vnode->lock_type == AFS_LOCK_WRITE)
-   fl_type = F_WRLCK;
+   type = F_WRLCK;
 
    list_for_each_entry_safe(p, _p, &vnode->pending_locks, fl_u.afs.link) {
if (error &&
-   p->fl_type == fl_type &&
+   p->fl_type == type &&
afs_file_key(p->fl_file) == key) {
            list_del_init(&p->fl_u.afs.link);
p->fl_u.afs.state = error;

-- 
2.43.0




[PATCH v2 07/41] 9p: rename fl_type variable in v9fs_file_do_lock

2024-01-25 Thread Jeff Layton
In later patches, we're going to introduce some macros that conflict
with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/9p/vfs_file.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index bae330c2f0cf..3df8aa1b5996 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -121,7 +121,6 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
struct p9_fid *fid;
uint8_t status = P9_LOCK_ERROR;
int res = 0;
-   unsigned char fl_type;
struct v9fs_session_info *v9ses;
 
fid = filp->private_data;
@@ -208,11 +207,12 @@ static int v9fs_file_do_lock(struct file *filp, int cmd, struct file_lock *fl)
 * it locally
 */
if (res < 0 && fl->fl_type != F_UNLCK) {
-   fl_type = fl->fl_type;
+   unsigned char type = fl->fl_type;
+
fl->fl_type = F_UNLCK;
/* Even if this fails we want to return the remote error */
locks_lock_file_wait(filp, fl);
-   fl->fl_type = fl_type;
+   fl->fl_type = type;
}
if (flock.client_id != fid->clnt->name)
kfree(flock.client_id);

-- 
2.43.0




[PATCH v2 06/41] lockd: rename fl_flags and fl_type variables in nlmclnt_lock

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some macros with names that
clash with the variable names here. Rename them.

Signed-off-by: Jeff Layton 
---
 fs/lockd/clntproc.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/lockd/clntproc.c b/fs/lockd/clntproc.c
index fba6c7fa7474..cc596748e359 100644
--- a/fs/lockd/clntproc.c
+++ b/fs/lockd/clntproc.c
@@ -522,8 +522,8 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
struct nlm_host *host = req->a_host;
    struct nlm_res  *resp = &req->a_res;
struct nlm_wait block;
-   unsigned char fl_flags = fl->fl_flags;
-   unsigned char fl_type;
+   unsigned char flags = fl->fl_flags;
+   unsigned char type;
__be32 b_status;
int status = -ENOLCK;
 
@@ -533,7 +533,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
 
fl->fl_flags |= FL_ACCESS;
status = do_vfs_lock(fl);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
if (status < 0)
goto out;
 
@@ -595,7 +595,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
if (do_vfs_lock(fl) < 0)
            printk(KERN_WARNING "%s: VFS is out of sync with lock manager!\n", __func__);
        up_read(&host->h_rwsem);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
status = 0;
}
if (status < 0)
@@ -605,7 +605,7 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
 * cases NLM_LCK_DENIED is returned for a permanent error.  So
 * turn it into an ENOLCK.
 */
-   if (resp->status == nlm_lck_denied && (fl_flags & FL_SLEEP))
+   if (resp->status == nlm_lck_denied && (flags & FL_SLEEP))
status = -ENOLCK;
else
status = nlm_stat_to_errno(resp->status);
@@ -622,13 +622,13 @@ nlmclnt_lock(struct nlm_rqst *req, struct file_lock *fl)
   req->a_host->h_addrlen, req->a_res.status);
dprintk("lockd: lock attempt ended in fatal error.\n"
"   Attempting to unlock.\n");
-   fl_type = fl->fl_type;
+   type = fl->fl_type;
fl->fl_type = F_UNLCK;
    down_read(&host->h_rwsem);
do_vfs_lock(fl);
    up_read(&host->h_rwsem);
-   fl->fl_type = fl_type;
-   fl->fl_flags = fl_flags;
+   fl->fl_type = type;
+   fl->fl_flags = flags;
    nlmclnt_async_call(cred, req, NLMPROC_UNLOCK, &nlmclnt_unlock_ops);
return status;
 }
@@ -683,7 +683,7 @@ nlmclnt_unlock(struct nlm_rqst *req, struct file_lock *fl)
struct nlm_host *host = req->a_host;
    struct nlm_res  *resp = &req->a_res;
int status;
-   unsigned char fl_flags = fl->fl_flags;
+   unsigned char flags = fl->fl_flags;
 
/*
 * Note: the server is supposed to either grant us the unlock
@@ -694,7 +694,7 @@ nlmclnt_unlock(struct nlm_rqst *req, struct file_lock *fl)
    down_read(&host->h_rwsem);
    status = do_vfs_lock(fl);
    up_read(&host->h_rwsem);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = flags;
if (status == -ENOENT) {
status = 0;
goto out;

-- 
2.43.0




[PATCH v2 04/41] nfs: rename fl_flags variable in nfs4_proc_unlck

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some temporary macros with
names that clash with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/nfs/nfs4proc.c  | 10 +-
 fs/nfs/nfs4state.c | 16 
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
index 23819a756508..5dd936a403f9 100644
--- a/fs/nfs/nfs4proc.c
+++ b/fs/nfs/nfs4proc.c
@@ -7045,7 +7045,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
struct rpc_task *task;
struct nfs_seqid *(*alloc_seqid)(struct nfs_seqid_counter *, gfp_t);
int status = 0;
-   unsigned char fl_flags = request->fl_flags;
+   unsigned char saved_flags = request->fl_flags;
 
status = nfs4_set_lock_state(state, request);
/* Unlock _before_ we do the RPC call */
@@ -7080,7 +7080,7 @@ static int nfs4_proc_unlck(struct nfs4_state *state, int cmd, struct file_lock *
status = rpc_wait_for_completion_task(task);
rpc_put_task(task);
 out:
-   request->fl_flags = fl_flags;
+   request->fl_flags = saved_flags;
trace_nfs4_unlock(request, state, F_SETLK, status);
return status;
 }
@@ -7398,7 +7398,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
 {
struct nfs_inode *nfsi = NFS_I(state->inode);
struct nfs4_state_owner *sp = state->owner;
-   unsigned char fl_flags = request->fl_flags;
+   unsigned char flags = request->fl_flags;
int status;
 
request->fl_flags |= FL_ACCESS;
@@ -7410,7 +7410,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
    if (test_bit(NFS_DELEGATED_STATE, &state->flags)) {
/* Yes: cache locks! */
/* ...but avoid races with delegation recall... */
-   request->fl_flags = fl_flags & ~FL_SLEEP;
+   request->fl_flags = flags & ~FL_SLEEP;
status = locks_lock_inode_wait(state->inode, request);
        up_read(&nfsi->rwsem);
        mutex_unlock(&sp->so_delegreturn_mutex);
@@ -7420,7 +7420,7 @@ static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock
    mutex_unlock(&sp->so_delegreturn_mutex);
status = _nfs4_do_setlk(state, cmd, request, NFS_LOCK_NEW);
 out:
-   request->fl_flags = fl_flags;
+   request->fl_flags = flags;
return status;
 }
 
diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
index 9a5d911a7edc..471caf06fa7b 100644
--- a/fs/nfs/nfs4state.c
+++ b/fs/nfs/nfs4state.c
@@ -847,15 +847,15 @@ void nfs4_close_sync(struct nfs4_state *state, fmode_t fmode)
  */
 static struct nfs4_lock_state *
 __nfs4_find_lock_state(struct nfs4_state *state,
-  fl_owner_t fl_owner, fl_owner_t fl_owner2)
+  fl_owner_t owner, fl_owner_t owner2)
 {
struct nfs4_lock_state *pos, *ret = NULL;
    list_for_each_entry(pos, &state->lock_states, ls_locks) {
-   if (pos->ls_owner == fl_owner) {
+   if (pos->ls_owner == owner) {
ret = pos;
break;
}
-   if (pos->ls_owner == fl_owner2)
+   if (pos->ls_owner == owner2)
ret = pos;
}
if (ret)
@@ -868,7 +868,7 @@ __nfs4_find_lock_state(struct nfs4_state *state,
  * exists, return an uninitialized one.
  *
  */
-static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, fl_owner_t fl_owner)
+static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, fl_owner_t owner)
 {
struct nfs4_lock_state *lsp;
struct nfs_server *server = state->owner->so_server;
@@ -879,7 +879,7 @@ static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, f
nfs4_init_seqid_counter(&lsp->ls_seqid);
refcount_set(&lsp->ls_count, 1);
lsp->ls_state = state;
-   lsp->ls_owner = fl_owner;
+   lsp->ls_owner = owner;
lsp->ls_seqid.owner_id = ida_alloc(&server->lockowner_id, GFP_KERNEL_ACCOUNT);
if (lsp->ls_seqid.owner_id < 0)
goto out_free;
@@ -993,7 +993,7 @@ static int nfs4_copy_lock_stateid(nfs4_stateid *dst,
const struct nfs_lock_context *l_ctx)
 {
struct nfs4_lock_state *lsp;
-   fl_owner_t fl_owner, fl_flock_owner;
+   fl_owner_t owner, fl_flock_owner;
int ret = -ENOENT;
 
if (l_ctx == NULL)
@@ -1002,11 +1002,11 @@ static int nfs4_copy_lock_stateid(nfs4_stateid *dst,
if (test_bit(LK_STATE_IN_USE, &state->flags) == 0)
goto out;
 
-   fl_owner = l_ctx->lockowner;
+   owner = l_ctx->lockowner;
fl_flock_owner = l_ctx->open_context->flock_owner;
 
spin_lock(&state->state_lock);
-   lsp = __nfs4_find_

[PATCH v2 03/41] dlm: rename fl_flags variable in dlm_posix_unlock

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some temporary macros with
names that clash with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/dlm/plock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/dlm/plock.c b/fs/dlm/plock.c
index d814c5121367..1b66b2d2b801 100644
--- a/fs/dlm/plock.c
+++ b/fs/dlm/plock.c
@@ -291,7 +291,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
struct dlm_ls *ls;
struct plock_op *op;
int rv;
-   unsigned char fl_flags = fl->fl_flags;
+   unsigned char saved_flags = fl->fl_flags;
 
ls = dlm_find_lockspace_local(lockspace);
if (!ls)
@@ -345,7 +345,7 @@ int dlm_posix_unlock(dlm_lockspace_t *lockspace, u64 number, struct file *file,
dlm_release_plock_op(op);
 out:
dlm_put_lockspace(ls);
-   fl->fl_flags = fl_flags;
+   fl->fl_flags = saved_flags;
return rv;
 }
 EXPORT_SYMBOL_GPL(dlm_posix_unlock);

-- 
2.43.0




[PATCH v2 02/41] filelock: rename fl_pid variable in lock_get_status

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some macros that will clash
with the variable name here. Rename it.

Signed-off-by: Jeff Layton 
---
 fs/locks.c | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/locks.c b/fs/locks.c
index cc7c117ee192..1eceaa56e47f 100644
--- a/fs/locks.c
+++ b/fs/locks.c
@@ -2695,11 +2695,11 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
loff_t id, char *pfx, int repeat)
 {
struct inode *inode = NULL;
-   unsigned int fl_pid;
+   unsigned int pid;
struct pid_namespace *proc_pidns = proc_pid_ns(file_inode(f->file)->i_sb);
int type;
 
-   fl_pid = locks_translate_pid(fl, proc_pidns);
+   pid = locks_translate_pid(fl, proc_pidns);
/*
 * If lock owner is dead (and pid is freed) or not visible in current
 * pidns, zero is shown as a pid value. Check lock info from
@@ -2747,11 +2747,11 @@ static void lock_get_status(struct seq_file *f, struct file_lock *fl,
 (type == F_RDLCK) ? "READ" : "UNLCK");
if (inode) {
/* userspace relies on this representation of dev_t */
-   seq_printf(f, "%d %02x:%02x:%lu ", fl_pid,
+   seq_printf(f, "%d %02x:%02x:%lu ", pid,
MAJOR(inode->i_sb->s_dev),
MINOR(inode->i_sb->s_dev), inode->i_ino);
} else {
-   seq_printf(f, "%d :0 ", fl_pid);
+   seq_printf(f, "%d :0 ", pid);
}
if (IS_POSIX(fl)) {
if (fl->fl_end == OFFSET_MAX)

-- 
2.43.0




[PATCH v2 01/41] filelock: rename some fields in tracepoints

2024-01-25 Thread Jeff Layton
In later patches we're going to introduce some macros with names that
clash with fields here. To prevent problems building, just rename the
fields in the trace entry structures.

Signed-off-by: Jeff Layton 
---
 include/trace/events/filelock.h | 76 -
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/include/trace/events/filelock.h b/include/trace/events/filelock.h
index 1646dadd7f37..8fb1d41b1c67 100644
--- a/include/trace/events/filelock.h
+++ b/include/trace/events/filelock.h
@@ -68,11 +68,11 @@ DECLARE_EVENT_CLASS(filelock_lock,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, fl_blocker)
-   __field(fl_owner_t, fl_owner)
-   __field(unsigned int, fl_pid)
-   __field(unsigned int, fl_flags)
-   __field(unsigned char, fl_type)
+   __field(struct file_lock *, blocker)
+   __field(fl_owner_t, owner)
+   __field(unsigned int, pid)
+   __field(unsigned int, flags)
+   __field(unsigned char, type)
__field(loff_t, fl_start)
__field(loff_t, fl_end)
__field(int, ret)
@@ -82,11 +82,11 @@ DECLARE_EVENT_CLASS(filelock_lock,
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
-   __entry->fl_blocker = fl ? fl->fl_blocker : NULL;
-   __entry->fl_owner = fl ? fl->fl_owner : NULL;
-   __entry->fl_pid = fl ? fl->fl_pid : 0;
-   __entry->fl_flags = fl ? fl->fl_flags : 0;
-   __entry->fl_type = fl ? fl->fl_type : 0;
+   __entry->blocker = fl ? fl->fl_blocker : NULL;
+   __entry->owner = fl ? fl->fl_owner : NULL;
+   __entry->pid = fl ? fl->fl_pid : 0;
+   __entry->flags = fl ? fl->fl_flags : 0;
+   __entry->type = fl ? fl->fl_type : 0;
__entry->fl_start = fl ? fl->fl_start : 0;
__entry->fl_end = fl ? fl->fl_end : 0;
__entry->ret = ret;
@@ -94,9 +94,9 @@ DECLARE_EVENT_CLASS(filelock_lock,
 
TP_printk("fl=%p dev=0x%x:0x%x ino=0x%lx fl_blocker=%p fl_owner=%p fl_pid=%u fl_flags=%s fl_type=%s fl_start=%lld fl_end=%lld ret=%d",
__entry->fl, MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
-   __entry->i_ino, __entry->fl_blocker, __entry->fl_owner,
-   __entry->fl_pid, show_fl_flags(__entry->fl_flags),
-   show_fl_type(__entry->fl_type),
+   __entry->i_ino, __entry->blocker, __entry->owner,
+   __entry->pid, show_fl_flags(__entry->flags),
+   show_fl_type(__entry->type),
__entry->fl_start, __entry->fl_end, __entry->ret)
 );
 
@@ -125,32 +125,32 @@ DECLARE_EVENT_CLASS(filelock_lease,
__field(struct file_lock *, fl)
__field(unsigned long, i_ino)
__field(dev_t, s_dev)
-   __field(struct file_lock *, fl_blocker)
-   __field(fl_owner_t, fl_owner)
-   __field(unsigned int, fl_flags)
-   __field(unsigned char, fl_type)
-   __field(unsigned long, fl_break_time)
-   __field(unsigned long, fl_downgrade_time)
+   __field(struct file_lock *, blocker)
+   __field(fl_owner_t, owner)
+   __field(unsigned int, flags)
+   __field(unsigned char, type)
+   __field(unsigned long, break_time)
+   __field(unsigned long, downgrade_time)
),
 
TP_fast_assign(
__entry->fl = fl ? fl : NULL;
__entry->s_dev = inode->i_sb->s_dev;
__entry->i_ino = inode->i_ino;
-   __entry->fl_blocker = fl ? fl->fl_blocker : NULL;
-   __entry->fl_owner = fl ? fl->fl_owner : NULL;
-   __entry->fl_flags = fl ? fl->fl_flags : 0;
-   __entry->fl_type = fl ? fl->fl_type : 0;
-   __entry->fl_break_time = fl ? fl->fl_break_time : 0;
-   __entry->fl_downgrade_time = fl ? fl->fl_downgrade_time : 0;
+   __entry->blocker = fl ? fl->fl_blocker : NULL;
+   __entry->owner = fl ? fl->fl_owner : NULL;
+   __entry->flags = fl ? fl->fl_flags : 0;
+   __entry->type = fl ? fl->fl_type : 0;
+   __entry->break_time = fl ? fl->fl_break_time : 0;
+   __entry->downgrade_time = fl ? fl->fl_downgrade_time : 0;
),
 
TP_printk("fl=%p dev=0x%x:

[PATCH v2 00/41] filelock: split struct file_lock into file_lock and file_lease structs

2024-01-25 Thread Jeff Layton
Long ago, file locks used to hang off of a singly-linked list in struct
inode. Because of this, when leases were added, they were added to the
same list and so they had to be tracked using the same sort of
structure.

Several years ago, we added struct file_lock_context, which allowed us
to use separate lists to track different types of file locks. Given
that, leases no longer need to be tracked using struct file_lock.

That said, a lot of the underlying infrastructure _is_ the same between
file leases and locks, so we can't completely separate everything.

This patchset first splits a group of fields used by both file locks and
leases into a new struct file_lock_core, that is then embedded in struct
file_lock. Coccinelle was then used to convert a lot of the callers to
deal with the move, with the remaining 25% or so converted by hand.
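
A rule of roughly this shape handles the bulk field moves (a simplified sketch only, not the actual contents of cocci/filelock.cocci; the embedded member name `c` is taken from the later patches):

```
@@
struct file_lock *fl;
@@
(
- fl->fl_flags
+ fl->c.flc_flags
|
- fl->fl_type
+ fl->c.flc_type
)
```

Each such rule rewrites every matching dereference tree-wide, which is why only the irregular call sites needed hand conversion.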

It then converts several internal functions in fs/locks.c to work
with struct file_lock_core. Lastly, struct file_lock is split into
struct file_lock and file_lease, and the lease-related APIs converted to
take struct file_lease.

After the first few patches (which I left split up for easier review),
the set should be bisectable. I'll plan to squash the first few
together to make sure the resulting set is bisectable before merge.

Finally, I left the coccinelle scripts I used in tree. I had heard it
was preferable to merge those along with the patches that they
generate, but I wasn't sure where they go. I can either move those to a
more appropriate location or we can just drop that commit if it's not
needed.

Signed-off-by: Jeff Layton 
---
Changes in v2:
- renamed file_lock_core fields to have "flc_" prefix
- used macros to more easily do the change piecemeal
- broke up patches into per-subsystem ones
- Link to v1: https://lore.kernel.org/r/20240116-flsplit-v1-0-c9d0f4370...@kernel.org

---
Jeff Layton (41):
  filelock: rename some fields in tracepoints
  filelock: rename fl_pid variable in lock_get_status
  dlm: rename fl_flags variable in dlm_posix_unlock
  nfs: rename fl_flags variable in nfs4_proc_unlck
  nfsd: rename fl_type and fl_flags variables in nfsd4_lock
  lockd: rename fl_flags and fl_type variables in nlmclnt_lock
  9p: rename fl_type variable in v9fs_file_do_lock
  afs: rename fl_type variable in afs_next_locker
  filelock: drop the IS_* macros
  filelock: split common fields into struct file_lock_core
  filelock: add coccinelle scripts to move fields to struct file_lock_core
  filelock: have fs/locks.c deal with file_lock_core directly
  filelock: convert some internal functions to use file_lock_core instead
  filelock: convert more internal functions to use file_lock_core
  filelock: make posix_same_owner take file_lock_core pointers
  filelock: convert posix_owner_key to take file_lock_core arg
  filelock: make locks_{insert,delete}_global_locks take file_lock_core arg
  filelock: convert locks_{insert,delete}_global_blocked
  filelock: make __locks_delete_block and __locks_wake_up_blocks take file_lock_core
  filelock: convert __locks_insert_block, conflict and deadlock checks to use file_lock_core
  filelock: convert fl_blocker to file_lock_core
  filelock: clean up locks_delete_block internals
  filelock: reorganize locks_delete_block and __locks_insert_block
  filelock: make assign_type helper take a file_lock_core pointer
  filelock: convert locks_wake_up_blocks to take a file_lock_core pointer
  filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx
  filelock: convert locks_translate_pid to take file_lock_core
  filelock: convert seqfile handling to use file_lock_core
  9p: adapt to breakup of struct file_lock
  afs: adapt to breakup of struct file_lock
  ceph: adapt to breakup of struct file_lock
  dlm: adapt to breakup of struct file_lock
  gfs2: adapt to breakup of struct file_lock
  lockd: adapt to breakup of struct file_lock
  nfs: adapt to breakup of struct file_lock
  nfsd: adapt to breakup of struct file_lock
  ocfs2: adapt to breakup of struct file_lock
  smb/client: adapt to breakup of struct file_lock
  smb/server: adapt to breakup of struct file_lock
  filelock: remove temporary compatability macros
  filelock: split leases out of struct file_lock

 cocci/filelock.cocci|  88 +
 cocci/nlm.cocci |  81 
 fs/9p/vfs_file.c|  40 +-
 fs/afs/flock.c  |  59 +--
 fs/ceph/locks.c |  74 ++--
 fs/dlm/plock.c  |  44 +--
 fs/gfs2/file.c  |  16 +-
 fs/libfs.c  |   2 +-
 fs/lockd/clnt4xdr.c |  14 +-
 fs/lockd/clntlock.c |   2 +-
 fs/lockd/clntproc.c |  65 +--
 fs/lockd/clntxdr.c  |  14 +-
 fs/lockd/svc4proc.c |  10 +-
 fs/lockd/svclock.c  |  64 +--
 fs/lockd

Re: [PATCH 00/20] filelock: split struct file_lock into file_lock and file_lease structs

2024-01-17 Thread Jeff Layton
On Wed, 2024-01-17 at 10:12 -0500, Chuck Lever wrote:
> On Tue, Jan 16, 2024 at 02:45:56PM -0500, Jeff Layton wrote:
> > Long ago, file locks used to hang off of a singly-linked list in struct
> > inode. Because of this, when leases were added, they were added to the
> > same list and so they had to be tracked using the same sort of
> > structure.
> > 
> > Several years ago, we added struct file_lock_context, which allowed us
> > to use separate lists to track different types of file locks. Given
> > that, leases no longer need to be tracked using struct file_lock.
> > 
> > That said, a lot of the underlying infrastructure _is_ the same between
> > file leases and locks, so we can't completely separate everything.
> 
> Naive question: locks and leases are similar. Why do they need to be
> split apart? The cover letter doesn't address that, and I'm new
> enough at this that I don't have that context.
> 

Leases and locks do have some similarities, but it's mostly the
internals (stuff like the blocker/waiter handling) where they are
similar. Superficially they are very different objects, and handling
them with the same struct is unintuitive.

So, for now this is just about cleaning up the lock and lease handling
APIs for better type safety and clarity. It's also nice to separate out
things like the kasync handling, which only applies to leases, as well
as splitting up the lock_manager_operations, which don't share any
operations between locks and leases.

Longer term, we're also considering adding things like directory
delegations, which may need to either expand struct file_lease, or add
a new variant (dir_deleg ?). I'd rather not add that complexity to
struct file_lock. 

> 
> > This patchset first splits a group of fields used by both file locks and
> > leases into a new struct file_lock_core, that is then embedded in struct
> > file_lock. Coccinelle was then used to convert a lot of the callers to
> > deal with the move, with the remaining 25% or so converted by hand.
> > 
> > It then converts several internal functions in fs/locks.c to work
> > with struct file_lock_core. Lastly, struct file_lock is split into
> > struct file_lock and file_lease, and the lease-related APIs converted to
> > take struct file_lease.
> > 
> > After the first few patches (which I left split up for easier review),
> > the set should be bisectable. I'll plan to squash the first few
> > together to make sure the resulting set is bisectable before merge.
> > 
> > Finally, I left the coccinelle scripts I used in tree. I had heard it
> > was preferable to merge those along with the patches that they
> > generate, but I wasn't sure where they go. I can either move those to a
> > more appropriate location or we can just drop that commit if it's not
> > needed.
> > 
> > I'd like to have this considered for inclusion in v6.9. Christian, would
> > you be amenable to shepherding this into mainline (assuming there are no
> > major objections, of course)?
> > 
> > Signed-off-by: Jeff Layton 
> > ---
> > Jeff Layton (20):
> >   filelock: split common fields into struct file_lock_core
> >   filelock: add coccinelle scripts to move fields to struct 
> > file_lock_core
> >   filelock: the results of the coccinelle conversion
> >   filelock: fixups after the coccinelle changes
> >   filelock: convert some internal functions to use file_lock_core 
> > instead
> >   filelock: convert more internal functions to use file_lock_core
> >   filelock: make posix_same_owner take file_lock_core pointers
> >   filelock: convert posix_owner_key to take file_lock_core arg
> >   filelock: make locks_{insert,delete}_global_locks take file_lock_core 
> > arg
> >   filelock: convert locks_{insert,delete}_global_blocked
> >   filelock: convert the IS_* macros to take file_lock_core
> >   filelock: make __locks_delete_block and __locks_wake_up_blocks take 
> > file_lock_core
> >   filelock: convert __locks_insert_block, conflict and deadlock checks 
> > to use file_lock_core
> >   filelock: convert fl_blocker to file_lock_core
> >   filelock: clean up locks_delete_block internals
> >   filelock: reorganize locks_delete_block and __locks_insert_block
> >   filelock: make assign_type helper take a file_lock_core pointer
> >   filelock: convert locks_wake_up_blocks to take a file_lock_core 
> > pointer
> >   filelock: convert locks_insert_lock_ctx and locks_delete_lock_ctx
> >   filelock: split leases out of struct file_lock
> > 
> >  cocci/filelock.cocci|  81 +

Re: [PATCH 02/20] filelock: add coccinelle scripts to move fields to struct file_lock_core

2024-01-17 Thread Jeff Layton
On Wed, 2024-01-17 at 13:25 +, David Howells wrote:
> Do we need to keep these coccinelle scripts for posterity?  Or can they just
> be included in the patch description of the patch that generates them?
> 

I have the same question. I included them here mostly so they can be
reviewed as well, but I'm not sure whether and how we should retain them
for posterity.
-- 
Jeff Layton 



Re: [PATCH 00/20] filelock: split struct file_lock into file_lock and file_lease structs

2024-01-17 Thread Jeff Layton
On Wed, 2024-01-17 at 13:48 +0100, Christian Brauner wrote:
> > I'd like to have this considered for inclusion in v6.9. Christian, would
> > you be amenable to shepherding this into mainline (assuming there are no
> > major objections, of course)?
> 
> Yes, of course I will be happy to.

Great! I probably have at least another version or two to send before
it's ready for linux-next, but hopefully we can get it there soon after
the merge window closes.

Thanks,
-- 
Jeff Layton 



Re: [PATCH 01/20] filelock: split common fields into struct file_lock_core

2024-01-17 Thread Jeff Layton
On Wed, 2024-01-17 at 09:07 +1100, NeilBrown wrote:
> On Wed, 17 Jan 2024, Jeff Layton wrote:
> > In a future patch, we're going to split file leases into their own
> > structure. Since a lot of the underlying machinery uses the same fields
> > move those into a new file_lock_core, and embed that inside struct
> > file_lock.
> > 
> > Signed-off-by: Jeff Layton 
> > ---
> >  include/linux/filelock.h | 9 +++--
> >  1 file changed, 7 insertions(+), 2 deletions(-)
> > 
> > diff --git a/include/linux/filelock.h b/include/linux/filelock.h
> > index 95e868e09e29..7825511c1c11 100644
> > --- a/include/linux/filelock.h
> > +++ b/include/linux/filelock.h
> > @@ -85,8 +85,9 @@ bool opens_in_grace(struct net *);
> >   *
> >   * Obviously, the last two criteria only matter for POSIX locks.
> >   */
> > -struct file_lock {
> > -   struct file_lock *fl_blocker;   /* The lock, that is blocking us */
> > +
> > +struct file_lock_core {
> > +   struct file_lock *fl_blocker;   /* The lock that is blocking us */
> > struct list_head fl_list;   /* link into file_lock_context */
> > struct hlist_node fl_link;  /* node in global lists */
> > struct list_head fl_blocked_requests;   /* list of requests with
> > @@ -102,6 +103,10 @@ struct file_lock {
> > int fl_link_cpu;/* what cpu's list is this on? */
> > wait_queue_head_t fl_wait;
> > struct file *fl_file;
> > +};
> > +
> > +struct file_lock {
> > +   struct file_lock_core fl_core;
> > loff_t fl_start;
> > loff_t fl_end;
> >  
> 
> If I we doing this, I would rename all the fields in file_lock_core to
> have an "flc_" prefix, and add some #defines like
> 
>  #define fl_list fl_core.flc_list
> 
> so there would be no need to squash this with later patches to achieve
> bisectability.
> 
> The #defines would be removed after the coccinelle scripts etc are
> applied.
> 
> I would also do the "convert some internal functions" patches *before*
> the bulk conversion of fl_foo to fl_code.flc_foo so that those functions
> don't get patched twice.
> 
> But this is all personal preference.  If you prefer your approach,
> please leave it that way.  The only clear benefit of my approach is that
> you don't need to squash patches together, and that is probably not a
> big deal.
> 

I considered going back and doing that. It would allow me to break this
up into smaller patches, but I think that basically means doing all of
this work over again. I'll probably stick with this approach, unless I
end up needing to respin for other reasons.

-- 
Jeff Layton 



Re: [PATCH 20/20] filelock: split leases out of struct file_lock

2024-01-17 Thread Jeff Layton
On Wed, 2024-01-17 at 09:44 +1100, NeilBrown wrote:
> On Wed, 17 Jan 2024, Jeff Layton wrote:
> > Add a new struct file_lease and move the lease-specific fields from
> > struct file_lock to it. Convert the appropriate API calls to take
> > struct file_lease instead, and convert the callers to use them.
> 
> I think that splitting of struct lease_manager_operations out from
> lock_manager_operations should be mentioned here too.
> 

Will do.

> 
> >  
> > +struct file_lease {
> > +   struct file_lock_core fl_core;
> > +   struct fasync_struct *  fl_fasync; /* for lease break notifications */
> > +   /* for lease breaks: */
> > +   unsigned long fl_break_time;
> > +   unsigned long fl_downgrade_time;
> > +   const struct lease_manager_operations *fl_lmops;/* Callbacks 
> > for lockmanagers */
> 
> comment should be "Callbacks for leasemanagers".  Or maybe 
> "lease managers". 
> 
> It is unfortunate that "lock" and "lease" both start with 'l' as we now
> have two quite different fields in different structures with the same
> name - fl_lmops.
> 

Hah, I had sort of considered that an advantage since I didn't need to
change as many call sites! Still, I get your point that having distinct
names is preferable.

I can change this to be distinct. I'll just need to come up with a
reasonable variable name (never my strong suit).

-- 
Jeff Layton 


