Re: [PATCH mm-unstable v2] mm/memcontrol: batch memcg charging in __memcg_slab_post_alloc_hook
From: Andrew Morton
Date: Fri Mar 27 2026 - 02:33:09 EST
On Fri, 20 Mar 2026 10:07:45 +0800 Hui Zhu <hui.zhu@xxxxxxxxx> wrote:
> From: teawater <zhuhui@xxxxxxxxxx>
We do prefer real names in Linux commits, please. Can I rewrite this
patch as From: Hui Zhu <hui.zhu@xxxxxxxxx>?
> When kmem_cache_alloc_bulk() allocates multiple objects, the post-alloc
> hook __memcg_slab_post_alloc_hook() previously charged memcg one object
> at a time, even though consecutive objects may reside on slabs backed by
> the same pgdat node.
>
> Batch the memcg charging by scanning ahead from the current position to
> find a contiguous run of objects whose slabs share the same pgdat, then
> issue a single __obj_cgroup_charge() / __consume_obj_stock() call for
> the entire run. The per-object obj_ext assignment loop is preserved as-is
> since it cannot be further collapsed.
>
> This implements the TODO comment left in commit bc730030f956 ("memcg:
> combine slab obj stock charging and accounting").
>
> The existing error-recovery contract is unchanged: if size == 1 then
> memcg_alloc_abort_single() will free the sole object, and for larger
> bulk allocations kmem_cache_free_bulk() will uncharge any objects that
> were already charged before the failure.
>
> Benchmark using kmem_cache_alloc_bulk() with SLAB_ACCOUNT
> (iters=100000):
>
> bulk=32  before: 215 ns/object  after: 174 ns/object  (-19%)
> bulk=1   before: 344 ns/object  after: 335 ns/object  (  ~ )
>
> No measurable regression for bulk=1, as expected.
Could memcg maintainers please review this?
Thanks.