[PATCH v3 0/7] mm: switch THP shrinker to list_lru

From: Johannes Weiner

Date: Wed Mar 18 2026 - 16:06:20 EST


This is version 3 of switching the THP shrinker to list_lru.

Changes in v3:
- dedicated lockdep_key for irqsafe deferred_split_lru.lock (syzbot)
- conditional list_lru ops in __folio_freeze_and_split_unmapped() (syzbot)
- annotate runs of inscrutable false, NULL, false function arguments (David)
- rename to folio_memcg_list_lru_alloc() (David)

Changes in v2:
- explicit rcu_read_lock() in __folio_freeze_and_split_unmapped() (Usama)
- split out list_lru prep bits (Dave)

The open-coded deferred split queue has issues: it is not NUMA-aware
when cgroups are enabled, and it adds complexity at every callsite
that interacts with it. Switching to list_lru fixes the NUMA problem
and streamlines the callsites. It also simplifies planned shrinker
work.

Patches 1-4 are cleanups and small refactors in list_lru code. They're
basically independent, but make the THP shrinker conversion easier.

Patch 5 extends the list_lru API to allow the caller to control the
locking scope. The THP shrinker has private state it needs to keep
synchronized with the LRU state.

Patch 6 extends the list_lru API with a convenience helper that
performs list_lru head allocation (memcg_list_lru_alloc) when starting
from a folio. Anon THPs are instantiated in several places, and with
the folio reparenting patches pending, folio_memcg() access is now a
more delicate dance. The helper avoids replicating that dance at every
callsite.

Patch 7 finally switches the deferred_split_queue to list_lru.

Based on mm-unstable.

include/linux/huge_mm.h | 6 +-
include/linux/list_lru.h | 46 ++++++
include/linux/memcontrol.h | 4 -
include/linux/mmzone.h | 12 --
mm/huge_memory.c | 342 ++++++++++++++-----------------------------
mm/internal.h | 2 +-
mm/khugepaged.c | 7 +
mm/list_lru.c | 196 ++++++++++++++++---------
mm/memcontrol.c | 12 +-
mm/memory.c | 52 ++++---
mm/mm_init.c | 15 --
11 files changed, 323 insertions(+), 371 deletions(-)