Re: [PATCH v3 5/7] mm: list_lru: introduce caller locking for additions and deletions
From: Johannes Weiner
Date: Fri Mar 20 2026 - 12:18:56 EST
On Wed, Mar 18, 2026 at 01:51:04PM -0700, Shakeel Butt wrote:
> On Wed, Mar 18, 2026 at 03:53:23PM -0400, Johannes Weiner wrote:
> > Locking is currently internal to the list_lru API. However, a caller
> > might want to keep auxiliary state synchronized with the LRU state.
> >
> > For example, the THP shrinker uses the lock of its custom LRU to keep
> > PG_partially_mapped and vmstats consistent.
> >
> > To allow the THP shrinker to switch to list_lru, provide normal and
> > irqsafe locking primitives as well as caller-locked variants of the
> > addition and deletion functions.
> >
> > Reviewed-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>
> > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>
> One nit below, other than that:
>
> Acked-by: Shakeel Butt <shakeel.butt@xxxxxxxxx>
>
> >
> > -static inline void lock_list_lru(struct list_lru_one *l, bool irq)
> > +static inline void lock_list_lru(struct list_lru_one *l, bool irq,
> > + unsigned long *irq_flags)
> > {
> > - if (irq)
> > + if (irq_flags)
> > + spin_lock_irqsave(&l->lock, *irq_flags);
> > + else if (irq)
>
> If we move __list_lru_walk_one to use irq_flags then we can remove the irq
> param. It is the reclaim code path and I don't think the additional cost of
> irqsave would matter here.
The workingset shrinker's isolation function uses unlock_irq() and
cond_resched(). Rewriting that would be non-trivial: we'd have to pass
flags around, keep irqs disabled for the whole reclaim cycle, or break
it into a two-stage process. Any of those sounds like a higher
maintenance burden than the bool here.
I know there is some cost to this distinction, but I actually do find
it useful to know the difference. It's self-documenting context.