Re: [PATCH 2/3] mm/memcontrol: disable demotion in memcg direct reclaim

From: Bing Jiao

Date: Sat Mar 21 2026 - 00:05:01 EST


On Fri, Mar 20, 2026 at 06:47:14PM +0530, Donet Tom wrote:
> Hi Bing
>
> On 3/18/26 4:37 AM, Bing Jiao wrote:
> > NUMA demotion counts towards reclaim targets in shrink_folio_list(), but
> > it does not reduce the total memory usage of a memcg. In memcg direct
> > reclaim paths (e.g., charge-triggered or manual limit writes), where
> > demotion is allowed, this leads to "fake progress" where the reclaim
> > loop concludes it has satisfied the memory request without actually
> > reducing the cgroup's charge.
> >
> > This could result in inefficient reclaim loops, CPU waste, moving all
> > pages to far-tier nodes, and potentially premature OOM kills when the
> > cgroup is under memory pressure but demotion is still possible.
> >
> > Introduce the MEMCG_RECLAIM_NO_DEMOTION flag to disable demotion in
> > these memcg-specific reclaim paths. This ensures that reclaim
> > progress is only counted when memory is actually freed or swapped out.

Hi, Donet,

Thank you for the feedback and for reviewing the patch.

> Thanks for the patch. With this change, are we completely disabling memory
> tiering in memcg?

Yes, this change completely disables demotion in memcg direct
reclaim, since demotion does not reduce the memcg's memory usage.

>
> Did you run any performance benchmarks with this patch?
>
>
> This patch looks good to me. Feel free to add
>
> Reviewed-by: Donet Tom <donettom@xxxxxxxxxxxxx>

Thanks again for the review!

Following a discussion with Yosry regarding demotion as an aging process,
I have decided to drop patches 2 and 3 from this series for now.

Additionally, Joshua Hahn's RFC ('Make memcg limits tier-aware') [1]
introduces a mechanism to scale memcg limits based on the ratio of
top-tier to total memory. That approach, or a similar one, might
provide a more comprehensive way to resolve the 'fake progress'
problem in memcg direct reclaim, or establish a better framework for
addressing such issues in the future.

Hope you have a great weekend!

Best regards,
Bing

[1] https://lore.kernel.org/linux-mm/20260223223830.586018-1-joshua.hahnjy@xxxxxxxxx/