Re: [PATCH] mm/percpu, memcontrol: Per-memcg-lruvec percpu accounting
From: Joshua Hahn
Date: Mon Mar 30 2026 - 17:18:19 EST
On Mon, 30 Mar 2026 16:21:12 +0200 Michal Hocko <mhocko@xxxxxxxx> wrote:
> On Mon 30-03-26 07:10:10, Joshua Hahn wrote:
> > On Mon, 30 Mar 2026 14:03:29 +0200 Michal Hocko <mhocko@xxxxxxxx> wrote:
> >
> > > On Fri 27-03-26 12:19:35, Joshua Hahn wrote:
> > > > Convert MEMCG_PERCPU_B from a memcg_stat_item to a memcg_node_stat_item
> > > > to give visibility into per-node breakdowns for percpu allocations and
> > > > turn it into NR_PERCPU_B.
> > >
> > > Why do we need/want this?
> >
> > Hello Michal,
> >
> > Thank you for reviewing my patch! I hope you are doing well.
> >
> > You're right, I could have done a better job of motivating the patch.
> > My intent with this patch is to give some more visibility into where
> > memory physically resides once you know which memcg it is in.
>
> Please keep in mind that WHY is very often much more important than HOW
> in the patch so you should always start with the intention and
> justification.
>
> > Percpu memory could probably be seen as "trivial" when it comes to figuring
> > out what node it is on, but I'm hoping to make similar transitions to the
> > rest of enum memcg_stat_item as well (you can see my work for the zswap
> > stats in [1]).
> >
> > Once all of this memory is tracked per-lruvec rather than per-memcg, the
> > end goal is to be able to attribute node placement within each memcg.
> > That can help with diagnosing things like asymmetric node pressure within
> > a memcg, which today can only be done with partial accuracy.
> >
> > Getting per-node breakdowns of percpu memory, orthogonal to memcgs, also
> > seems like a win to me. Even if imbalances are unlikely, I think we can
> > benefit from some visibility into whether percpu allocations are spread
> > evenly across all CPUs.
> >
> > What do you think? Thank you again, I hope you have a great day!
>
> I think that you should have started with this intended outcome first
> rather than slicing it in pieces. Why do we want to shift to per-node
> stats for other/all counters? What is the associated cost compared to the
> existing accounting (if any)?
I went and ran a few tests, which seem to show a rather negligible performance
difference (phew). I wrote a kernel module that does 100k percpu allocations
via __alloc_percpu_gfp() with GFP_KERNEL | __GFP_ACCOUNT inside a cgroup. I
then measured how long each allocation takes in two scenarios: one where I do
all 100k allocations and then free all of them at once, and another where I
interleave the allocs and frees. Everything below is in ns / alloc, and the
+/- is the standard deviation across 20 trials of each scenario.
+-------------+----------------+--------------+--------------+
| Test        | linus-upstream | patch        | diff         |
+-------------+----------------+--------------+--------------+
| Batched     | 6586 +/- 51    | 6595 +/- 35  | +9 (+0.13%)  |
| Interleaved | 1053 +/- 126   | 1085 +/- 113 | +32 (+0.85%) |
+-------------+----------------+--------------+--------------+
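In case it helps, here is a rough sketch of the kind of module I used for the
batched variant (not the exact code I ran; NR_ALLOCS, the allocation size, and
the names below are illustrative):

#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/gfp.h>
#include <linux/ktime.h>
#include <linux/math64.h>
#include <linux/vmalloc.h>

#define NR_ALLOCS	100000
#define ALLOC_SIZE	64		/* illustrative allocation size */

static int __init percpu_bench_init(void)
{
	void __percpu **ptrs;
	ktime_t start, total = 0;
	int i;

	ptrs = vmalloc(NR_ALLOCS * sizeof(*ptrs));
	if (!ptrs)
		return -ENOMEM;

	/* Batched variant: do all allocations first, timing each one... */
	for (i = 0; i < NR_ALLOCS; i++) {
		start = ktime_get();
		ptrs[i] = __alloc_percpu_gfp(ALLOC_SIZE, sizeof(long),
					     GFP_KERNEL | __GFP_ACCOUNT);
		total = ktime_add(total, ktime_sub(ktime_get(), start));
		if (!ptrs[i])
			break;
	}

	if (i)
		pr_info("percpu_bench: %lld ns/alloc over %d allocs\n",
			div_s64(ktime_to_ns(total), i), i);

	/* ... then free everything at once. */
	while (i--)
		free_percpu(ptrs[i]);
	vfree(ptrs);

	return 0;
}

static void __exit percpu_bench_exit(void)
{
}

module_init(percpu_bench_init);
module_exit(percpu_bench_exit);
MODULE_LICENSE("GPL");

The interleaved variant is the same loop with the free_percpu() moved inside
it. Loading the module from a task inside the target cgroup is what makes the
__GFP_ACCOUNT charges land in that cgroup.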
I'll include these numbers, as well as the additional memory overhead that
Yosry suggested reporting, in a v2. I also think we can get more accurate
accounting by distributing the obj_cgroup pointer size across the CPUs, so
I've gone ahead and done that for the next iteration.
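To make the obj_cgroup pointer distribution concrete, here is a very rough
sketch of the idea (illustrative only, not the v2 diff; the helper name is
made up and this would sit next to the existing memcg hooks in mm/percpu.c):

static void pcpu_mod_nr_percpu_b(struct obj_cgroup *objcg, size_t size,
				 long sign)
{
	/*
	 * Pointer-array overhead that pcpu_obj_full_size() adds on top of
	 * the per-CPU copies, split evenly across the possible CPUs.
	 */
	size_t extra = pcpu_obj_full_size(size) - size * num_possible_cpus();
	long per_cpu_bytes = sign * (long)(size + extra / num_possible_cpus());
	struct mem_cgroup *memcg;
	int cpu;

	rcu_read_lock();
	memcg = obj_cgroup_memcg(objcg);
	for_each_possible_cpu(cpu) {
		struct lruvec *lruvec;

		/* Charge each CPU's share to the node that CPU lives on. */
		lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(cpu_to_node(cpu)));
		mod_lruvec_state(lruvec, NR_PERCPU_B, per_cpu_bytes);
	}
	rcu_read_unlock();
}

The alloc and free hooks would then call something like this with sign = 1
and sign = -1 respectively, instead of doing a single mod_memcg_state() of
pcpu_obj_full_size(size).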
Thank you again for your insight, Michal!
Joshua