Re: [PATCH v2 0/9] mm/huge_memory: refactor zap_huge_pmd()
From: Mike Rapoport
Date: Tue Mar 24 2026 - 04:20:53 EST
On Mon, Mar 23, 2026 at 02:36:04PM -0700, Andrew Morton wrote:
> On Mon, 23 Mar 2026 12:34:31 +0000 Pedro Falcato <pfalcato@xxxxxxx> wrote:
> >
> > FWIW I wholeheartedly agree. I don't understand how we don't require proper
> > M: or R: reviews on patches before merging
>
> I wish people would stop making this claim, without substantiation.
> I've looked (deeply) at the data, which is equally available to us all.
> Has anyone else?
>
> After weeding out a few special cases (especially DAMON) (this time
> also maple_tree), the amount of such unreviewed material which enters
> mm-stable and mainline is very very low.
Here's a breakdown of MM commit review tags (with DAMON excluded) since v6.10:
------------------------------------------------------------------------------
Release Total Reviewed-by Acked-by only No review DAMON excl
------------------------------------------------------------------------------
v6.10 318 206 (65%) 36 (11%) 76 (24%) 10
v6.11 270 131 (49%) 72 (27%) 67 (25%) 17
v6.12 333 161 (48%) 65 (20%) 107 (32%) 18
v6.13 180 94 (52%) 29 (16%) 57 (32%) 8
v6.14 217 103 (47%) 40 (18%) 74 (34%) 30
v6.15 289 129 (45%) 45 (16%) 115 (40%) 43
v6.16 198 126 (64%) 44 (22%) 28 (14%) 16
v6.17 245 181 (74%) 41 (17%) 23 (9%) 53
v6.18 205 150 (73%) 28 (14%) 27 (13%) 34
v6.19 228 165 (72%) 33 (14%) 30 (13%) 64
------------------------------------------------------------------------------
There's indeed a sharp reduction in the amount of unreviewed material that
gets merged since v6.15, i.e. after the last LSF/MM when we updated the
process and nominated people as sub-maintainers and reviewers for different
parts of MM. This very much confirms that splitting up the MM entry and
letting people step up as sub-maintainers pays off.
But we are still in double digits for the percentage of commits without
Reviewed-by tags, despite the effort people (especially David and Lorenzo)
are putting into review. I wouldn't say that even 9% is "very very low".
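(Purely for illustration, here is a rough sketch of how per-release counts
like the ones above could be gathered; this is a hypothetical script, not
the one that actually produced the table, and it assumes it runs inside a
kernel git checkout:)

```python
# Hypothetical sketch: count mm/ commits between two release tags by
# their strongest review tag. Not the script used for the table above.
import subprocess

def classify(message):
    """Classify a commit message by its strongest review tag."""
    if "Reviewed-by:" in message:
        return "reviewed"
    if "Acked-by:" in message:
        return "acked-only"
    return "no-review"

def count_tags(prev_tag, tag, paths=("mm",)):
    """Tally commits touching the given paths in prev_tag..tag.

    Requires running inside a Linux kernel git checkout; commit
    bodies are separated with NUL so multi-line messages stay whole.
    """
    out = subprocess.check_output(
        ["git", "log", "--format=%B%x00", f"{prev_tag}..{tag}",
         "--", *paths],
        text=True)
    counts = {"reviewed": 0, "acked-only": 0, "no-review": 0}
    for body in out.split("\x00"):
        if body.strip():
            counts[classify(body)] += 1
    return counts
```

A DAMON exclusion would just be an extra filter on the commit subject or
touched paths before classification.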
> > Like, sure, sashiko can be useful, and is better than nothing. But unless
> > sashiko is better than the maintainers, it should be kept as optional.
>
> Rule #1 is, surely, "don't add bugs". This thing finds bugs. If its
> hit rate is 50% then that's plenty high enough to justify people
> spending time to go through and check its output.
>
> > Seriously, I can't wrap my head around the difference in treatment in
> > "human maintainers, experts in the code, aren't required to review a patch"
>
> Speaking of insulting.
>
> > vs "make the fscking AI happy or it's not going anywhere". It's almost
> > insulting.
>
> Look, I know people are busy. If checking these reports slows us down
> and we end up merging less code and less buggy code then that's a good
> tradeoff.
If you think this is a good trade-off, then slowing down to wait for human
review so that we merge less buggy and more maintainable code is a good
trade-off too.
While LLMs can detect potential bugs, they are not capable of identifying
potential maintainability issues.
> Also, gimme a break. Like everyone else I'm still trying to wrap my
> head how best to incorporate this new tool into our development
> processes.
It would be nice if we had a more formal description of our development
process in Documentation/process/maintainer-mm.rst and then we can add a
few sentences about how to incorporate this tool into the process when we
figure this out.
Right now our process is tribal knowledge; having "Rule #1" and a few
others written down would help everyone who participates in MM development.
--
Sincerely yours,
Mike.