Re: [PATCH v7 0/2] skip redundant sync IPIs when TLB flush sent them

From: Lance Yang

Date: Tue Mar 24 2026 - 02:27:30 EST



On Mon, Mar 23, 2026 at 01:53:17PM -0700, Andrew Morton wrote:
>On Mon, 9 Mar 2026 10:07:09 +0800 Lance Yang <lance.yang@xxxxxxxxx> wrote:
>
>> Hi all,
>>
>> When page table operations require synchronization with software/lockless
>> walkers, they call tlb_remove_table_sync_{one,rcu}() after flushing the
>> TLB (i.e. when tlb->freed_tables or tlb->unshared_tables is set).
>>
>> On architectures where the TLB flush already sends IPIs to all target CPUs,
>> the subsequent sync IPI broadcast is redundant. This is not only costly on
>> large systems where it disrupts all CPUs even for single-process page table
>> operations, but has also been reported to hurt RT workloads[1].
>>
>> This series introduces tlb_table_flush_implies_ipi_broadcast() to check if
>> the prior TLB flush already provided the necessary synchronization. When
>> true, the sync calls can early-return.
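>>
>> Roughly, the generic side looks like this (a simplified sketch of the
>> idea, not the exact patch code; the signatures and the asm-generic
>> fallback shown here may differ in the series):
>>
>>     /* include/asm-generic/tlb.h: default when the arch makes no promise */
>>     #ifndef tlb_table_flush_implies_ipi_broadcast
>>     static inline bool tlb_table_flush_implies_ipi_broadcast(void)
>>     {
>>             return false;
>>     }
>>     #endif
>>
>>     /* mm/mmu_gather.c */
>>     void tlb_remove_table_sync_one(void)
>>     {
>>             /*
>>              * If the architecture's TLB flush already IPI'd every CPU
>>              * that could be walking these page tables, a second
>>              * broadcast adds nothing -- skip it.
>>              */
>>             if (tlb_table_flush_implies_ipi_broadcast())
>>                     return;
>>
>>             smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
>>     }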
>>
>> A few cases rely on this synchronization:
>>
>> 1) hugetlb PMD unshare[2]: The problem is not the freeing itself, but the
>> last remaining user reusing the PMD table for other purposes after
>> unsharing.
>>
>> 2) khugepaged collapse[3]: Ensure no GUP-fast walk is in progress before
>> collapsing and (possibly) freeing or re-depositing the page table.
>>
>> Two-step plan as David suggested[4]:
>>
>> Step 1 (this series): Skip the redundant sync when we are 100% certain the
>> TLB flush sent IPIs. INVLPGB is excluded because, when it is supported, we
>> cannot guarantee IPIs were sent; leaving it out keeps the check clean and
>> simple.
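>>
>> On x86, that certainty check boils down to something like the following
>> (the gist only; the actual patches are more careful about which CPUs the
>> flush targets and how the feature is detected):
>>
>>     /* arch/x86/include/asm/tlb.h -- illustrative sketch */
>>     static inline bool tlb_table_flush_implies_ipi_broadcast(void)
>>     {
>>             /*
>>              * Without INVLPGB, a remote TLB flush on x86 is carried by
>>              * IPIs, so the flush already serialized against lockless
>>              * walkers that run with IRQs disabled.  With INVLPGB the
>>              * flush may be a hardware broadcast with no IPI at all, so
>>              * no such guarantee exists.
>>              */
>>             return !cpu_feature_enabled(X86_FEATURE_INVLPGB);
>>     }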
>>
>> Step 2 (future work): Send targeted IPIs only to CPUs actually doing
>> software/lockless page table walks, benefiting all architectures.
>>
>> Regarding Step 2, it only applies to setups where Step 1 does not, such as
>> x86 with INVLPGB or arm64. That work is ongoing; early attempts showed ~3%
>> GUP-fast overhead, and reducing it needs more work and tuning. It will be
>> submitted separately once ready.
>>
>> ...
>>
>> arch/x86/include/asm/tlb.h | 17 ++++++++++++++++-
>> arch/x86/include/asm/tlbflush.h | 2 ++
>> arch/x86/kernel/smpboot.c | 1 +
>> arch/x86/mm/tlb.c | 15 +++++++++++++++
>> include/asm-generic/tlb.h | 17 +++++++++++++++++
>> mm/mmu_gather.c | 15 +++++++++++++++
>> 6 files changed, 66 insertions(+), 1 deletion(-)
>
>Kinda straddles both MM and x86.
>
>I expect a v8 based on David's comments.

Yes, a v8 is on the way.

>One merge path is for the x86 people to take this, noting David's acks.
>
>The other merge path is via mm.git, if the x86 people can please
>perform review.
>
>And... mm.git is basically full (overflowing) for this cycle and
>review/test has some catching up to do. So I'd prefer to only take the
>important things. This patchset is a performance improvement but
>contains no measurements to demonstrate the benefit, so I'm not able to
>determine its importance!

That's a fair point. I should have included numbers from the start.

On a 64-core Intel x86 server, collapsing a 20 GiB range generated 646,316
CAL (function call) interrupts without this series and only 785 with it,
as counted via /proc/interrupts.

The larger the system, the more costly redundant broadcast IPIs become.

Thanks,
Lance