[PATCH 1/1] mm/vmscan: prevent MGLRU reclaim from pinning address space

From: Suren Baghdasaryan

Date: Sun Mar 22 2026 - 03:09:00 EST


When shrinking a lruvec, MGLRU pins the address space before walking
it. This is excessive: all the walk needs is a stable mm_struct, so
that mmap_read_lock can be taken and released, and a stable mm->mm_mt
tree to iterate. Pinning the address space delays the release of a
dying process's memory. It also prevents the mm reapers (both the
in-kernel oom-reaper and userspace process_mrelease()) from doing
their job during an MGLRU scan, because they check
task_will_free_mem(), which yields a negative result due to the
elevated mm->mm_users.

Replace the unnecessary address space pinning with mm_struct pinning
by switching the mmget/mmput calls to mmgrab/mmdrop. mm_mt is embedded
in mm_struct itself, so it cannot be freed as long as the mm_struct is
stable, and it cannot change during the walk because mmap_read_lock is
held.

Fixes: bd74fdaea146 ("mm: multi-gen LRU: support page table walks")
Signed-off-by: Suren Baghdasaryan <surenb@xxxxxxxxxx>
---
mm/vmscan.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 33287ba4a500..68e8e90e38f5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2863,8 +2863,9 @@ static struct mm_struct *get_next_mm(struct lru_gen_mm_walk *walk)
 		return NULL;
 
 	clear_bit(key, &mm->lru_gen.bitmap);
+	mmgrab(mm);
 
-	return mmget_not_zero(mm) ? mm : NULL;
+	return mm;
 }
 
 void lru_gen_add_mm(struct mm_struct *mm)
@@ -3064,7 +3065,7 @@ static bool iterate_mm_list(struct lru_gen_mm_walk *walk, struct mm_struct **ite
 		reset_bloom_filter(mm_state, walk->seq + 1);
 
 	if (*iter)
-		mmput_async(*iter);
+		mmdrop(*iter);
 
 	*iter = mm;
 

base-commit: 8c65073d94c8b7cc3170de31af38edc9f5d96f0e
--
2.53.0.1018.g2bb0e51243-goog