Re: [PATCH v3 1/2] mm/mglru: fix cgroup OOM during MGLRU state switching

From: Barry Song

Date: Tue Mar 17 2026 - 03:56:26 EST


On Mon, Mar 16, 2026 at 1:56 PM Leno Hou via B4 Relay
<devnull+lenohou.gmail.com@xxxxxxxxxx> wrote:
>
> From: Leno Hou <lenohou@xxxxxxxxx>
>
> When the Multi-Gen LRU (MGLRU) state is toggled dynamically, a race
> condition exists between the state switching and the memory reclaim path.
> This can lead to unexpected cgroup OOM kills, even when plenty of
> reclaimable memory is available.
>
> Problem Description
> ===================
>
> The issue arises from a "reclaim vacuum" during the transition.
>
> 1. When disabling MGLRU, lru_gen_change_state() sets lrugen->enabled to
> false before the pages are drained from MGLRU lists back to traditional
> LRU lists.
> 2. Concurrent reclaimers in shrink_lruvec() see lrugen->enabled as false
> and skip the MGLRU path.
> 3. However, these pages might not have reached the traditional LRU lists
> yet, or the changes are not yet visible to all CPUs due to a lack
> of synchronization.
> 4. get_scan_count() subsequently finds traditional LRU lists empty,
> concludes there is no reclaimable memory, and triggers an OOM kill.
>
> A similar race can occur during enablement, where the reclaimer sees the
> new state but the MGLRU lists haven't been populated via fill_evictable()
> yet.
>
> Solution
> ========
>
> Introduce a 'draining' state (`lru_drain_core`) to bridge the transition.
> When transitioning, the system enters this intermediate state where
> the reclaimer is forced to attempt both MGLRU and traditional reclaim
> paths sequentially. This ensures that folios remain visible to at least
> one reclaim mechanism until the transition has fully propagated to
> all CPUs.
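
(Aside, to restate the intended semantics as I read them: during the
draining window a reclaimer tries both paths. The sketch below is a
simplified userspace model, not the kernel code; the atomic flags and
all names are illustrative only.)

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace model of the transition: 'mglru_enabled' mirrors
 * lrugen->enabled, 'mglru_draining' mirrors lru_drain_core.
 * Names are illustrative, not the kernel's. */
static atomic_bool mglru_enabled = true;
static atomic_bool mglru_draining = false;

static bool model_lru_gen_enabled(void)
{
	return atomic_load_explicit(&mglru_enabled, memory_order_acquire);
}

static bool model_lru_gen_draining(void)
{
	return atomic_load_explicit(&mglru_draining, memory_order_acquire);
}

/* Which paths a reclaimer tries: bit 0 = MGLRU, bit 1 = traditional. */
static int reclaim_paths(void)
{
	int paths = 0;

	if (model_lru_gen_enabled() || model_lru_gen_draining())
		paths |= 1;	/* scan MGLRU lists */
	if (!model_lru_gen_enabled() || model_lru_gen_draining())
		paths |= 2;	/* fall through to traditional LRU lists */
	return paths;
}

/* Switcher: raise 'draining' before flipping 'enabled', so there is
 * never a window in which neither path covers the folios. */
static void begin_disable(void)
{
	atomic_store_explicit(&mglru_draining, true, memory_order_release);
	atomic_store_explicit(&mglru_enabled, false, memory_order_release);
}

/* Drop 'draining' only after folios have been moved and the flip is
 * visible everywhere. */
static void finish_disable(void)
{
	atomic_store_explicit(&mglru_draining, false, memory_order_release);
}
```
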
>
> Changes
> =======
>
> v3:
> - Rebase onto mm-new branch for queue testing
> - Skip look-around while draining
> - Address Barry Song's review comments
>
> v2:
> - Replace the 'draining' flag with a static branch `lru_drain_core` to
> track the transition state.
> - Ensure all LRU helpers correctly identify page state by checking
> folio_lru_gen(folio) != -1 instead of relying solely on global flags.
> - Maintain workingset refault context across MGLRU state transitions.
> - Fix build error when CONFIG_LRU_GEN is disabled.
>
> v1:
> - Use smp_store_release() and smp_load_acquire() to ensure the visibility
> of 'enabled' and 'draining' flags across CPUs.
> - Modify shrink_lruvec() to allow a "joint reclaim" period. If an lruvec
> is in the 'draining' state, the reclaimer will attempt to scan MGLRU
> lists first, and then fall through to traditional LRU lists instead
> of returning early. This ensures that folios are visible to at least
> one reclaim path at any given time.
>
> Race & Mitigation
> =================
>
> A race window exists between checking the 'draining' state and performing
> the actual list operations. For instance, a reclaimer might observe the
> draining state as false just before it changes, leading to a suboptimal
> reclaim path decision.
>
> However, this impact is effectively mitigated by the kernel's reclaim
> retry mechanism (e.g., in do_try_to_free_pages). If a reclaimer pass fails
> to find eligible folios due to a state transition race, subsequent retries
> in the loop will observe the updated state and correctly direct the scan
> to the appropriate LRU lists. This ensures the transient inconsistency
> does not escalate into a terminal OOM kill.
>
> This effectively narrows the race window that previously triggered OOMs
> under high memory pressure.
>
> To: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> To: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
> To: Yuanchu Xie <yuanchu@xxxxxxxxxx>
> To: Wei Xu <weixugc@xxxxxxxxxx>
> To: Barry Song <21cnbao@xxxxxxxxx>
> To: Jialing Wang <wjl.linux@xxxxxxxxx>
> To: Yafang Shao <laoar.shao@xxxxxxxxx>
> To: Yu Zhao <yuzhao@xxxxxxxxxx>
> To: Kairui Song <ryncsn@xxxxxxxxx>
> To: Bingfang Guo <bfguo@xxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Signed-off-by: Leno Hou <lenohou@xxxxxxxxx>
> ---
> include/linux/mm_inline.h | 16 ++++++++++++++++
> mm/rmap.c                 |  2 +-
> mm/swap.c                 | 15 +++++++++------
> mm/vmscan.c               | 38 +++++++++++++++++++++++++++++---------
> 4 files changed, 55 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index ad50688d89db..16ac700dac9c 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -102,6 +102,12 @@ static __always_inline enum lru_list folio_lru_list(const struct folio *folio)
>
> #ifdef CONFIG_LRU_GEN
>
> +static inline bool lru_gen_draining(void)
> +{
> + DECLARE_STATIC_KEY_FALSE(lru_drain_core);
> +
> + return static_branch_unlikely(&lru_drain_core);
> +}
> #ifdef CONFIG_LRU_GEN_ENABLED
> static inline bool lru_gen_enabled(void)
> {
> @@ -316,11 +322,21 @@ static inline bool lru_gen_enabled(void)
> return false;
> }
>
> +static inline bool lru_gen_draining(void)
> +{
> + return false;
> +}
> +
> static inline bool lru_gen_in_fault(void)
> {
> return false;
> }
>
> +static inline int folio_lru_gen(const struct folio *folio)
> +{
> + return -1;
> +}
> +
> static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
> {
> return false;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 6398d7eef393..0b5f663f3062 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -966,7 +966,7 @@ static bool folio_referenced_one(struct folio *folio,
> nr = folio_pte_batch(folio, pvmw.pte, pteval, max_nr);
> }
>
> - if (lru_gen_enabled() && pvmw.pte) {
> + if (lru_gen_enabled() && !lru_gen_draining() && pvmw.pte) {
> if (lru_gen_look_around(&pvmw, nr))
> referenced++;
> } else if (pvmw.pte) {
> diff --git a/mm/swap.c b/mm/swap.c
> index 5cc44f0de987..ecb192c02d2e 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -462,7 +462,7 @@ void folio_mark_accessed(struct folio *folio)
> {
> if (folio_test_dropbehind(folio))
> return;
> - if (lru_gen_enabled()) {
> + if (folio_lru_gen(folio) != -1) {

I still feel this is quite dangerous. A folio could be on the
lru_cache rather than on MGLRU’s lists.

This still changes MGLRU’s behavior, much like your v2, which
effectively disabled look_around.

I mentioned this in v2: please avoid depending on
folio_lru_gen() == -1 unless it is absolutely necessary and you are
certain the folio is on an LRU list.

This is hard to verify case by case. From a design perspective,
relying on folio_lru_gen() == -1 is not appropriate.
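
To make the hazard concrete, here is a tiny userspace model (all names
are mine, not the kernel's): with MGLRU enabled, a folio still sitting
in a per-CPU batch carries no generation, so a `!= -1` check routes it
away from the MGLRU path even though MGLRU is supposed to handle it.

```c
#include <stdbool.h>

/* Hypothetical model: where a folio currently lives. */
enum folio_where {
	FOLIO_ON_LRU_CACHE,	/* queued in a per-CPU batch, on no LRU list */
	FOLIO_ON_MGLRU_LIST,	/* on a multi-gen LRU list, has a generation */
};

/* Models folio_lru_gen(): only folios on MGLRU lists carry a generation. */
static int model_folio_lru_gen(enum folio_where where)
{
	return where == FOLIO_ON_MGLRU_LIST ? 1 : -1;
}

/* The patch's check: treat "has a generation" as "MGLRU handles it". */
static bool patch_takes_mglru_path(enum folio_where where)
{
	return model_folio_lru_gen(where) != -1;
}
```
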

Thanks
Barry