On Mon, 2 Mar 2026 15:10:03 +0100 "Vlastimil Babka (SUSE)" <[email protected]> 
wrote:

> On 2/27/26 17:00, Dmitry Ilvokhin wrote:
> > This intentionally breaks direct users of zone->lock at compile time so
> > all call sites are converted to the zone lock wrappers. Without the
> > rename, present and future out-of-tree code could continue using
> > spin_lock(&zone->lock) and bypass the wrappers and tracing
> > infrastructure.
> > 
> > No functional change intended.
> > 
> > Suggested-by: Andrew Morton <[email protected]>
> > Signed-off-by: Dmitry Ilvokhin <[email protected]>
> > Acked-by: Shakeel Butt <[email protected]>
> > Acked-by: SeongJae Park <[email protected]>
> 
> I see some more instances of 'zone->lock' in comments in
> include/linux/mmzone.h and under Documentation/ but otherwise LGTM.
> 

I fixed (most of) that in the previous version, but my fix was lost.


 include/linux/mmzone.h |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

--- a/include/linux/mmzone.h~mm-rename-zone-lock-to-zone-_lock-fix
+++ a/include/linux/mmzone.h
@@ -1037,12 +1037,12 @@ struct zone {
         * Locking rules:
         *
         * zone_start_pfn and spanned_pages are protected by span_seqlock.
-        * It is a seqlock because it has to be read outside of zone->lock,
+        * It is a seqlock because it has to be read outside of zone_lock,
         * and it is done in the main allocator path.  But, it is written
         * quite infrequently.
         *
-        * The span_seq lock is declared along with zone->lock because it is
-        * frequently read in proximity to zone->lock.  It's good to
+        * The span_seq lock is declared along with zone_lock because it is
+        * frequently read in proximity to zone_lock.  It's good to
         * give them a chance of being in the same cacheline.
         *
         * Write access to present_pages at runtime should be protected by
@@ -1065,7 +1065,7 @@ struct zone {
        /*
         * Number of isolated pageblock. It is used to solve incorrect
         * freepage counting problem due to racy retrieving migratetype
-        * of pageblock. Protected by zone->lock.
+        * of pageblock. Protected by zone_lock.
         */
        unsigned long           nr_isolate_pageblock;
 #endif
@@ -1502,7 +1502,7 @@ typedef struct pglist_data {
         * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
         * or CONFIG_DEFERRED_STRUCT_PAGE_INIT.
         *
-        * Nests above zone->lock and zone->span_seqlock
+        * Nests above zone_lock and zone->span_seqlock
         */
        spinlock_t node_size_lock;
 #endif
_

