Re: [PATCH 1/4] mm/mprotect: encourage inlining with __always_inline

From: Lorenzo Stoakes (Oracle)

Date: Thu Mar 19 2026 - 15:00:28 EST


On Thu, Mar 19, 2026 at 06:31:05PM +0000, Pedro Falcato wrote:
> Encourage the compiler to inline batch PTE logic and resolve constant
> branches by adding __always_inline strategically.
>
> Signed-off-by: Pedro Falcato <pfalcato@xxxxxxx>

Does the inlining behaviour here really vary that much by compiler/arch?
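
Just to spell out the intent as I understand it, here's a minimal
userspace sketch (hypothetical helper name, not the actual mm/mprotect.c
code) of the constant-branch folding the commit message relies on:

#include <stdbool.h>

/* stand-in for the kernel's __always_inline from <linux/compiler_types.h> */
#define __always_inline inline __attribute__((__always_inline__))

static __always_inline void frob_pte_bits(unsigned long *word, bool set_write)
{
	if (set_write)		/* constant once inlined into the caller */
		*word |= 0x2UL;	/* hypothetical "writable" bit */
	else
		*word &= ~0x2UL;
}

void caller_sets_write(unsigned long *word)
{
	/* set_write is constant here, so the branch folds to a single OR */
	frob_pte_bits(word, true);
}

With plain static (non-forced) inlining, the compiler is free to keep an
out-of-line copy with a real branch, which I assume is what this patch
is trying to rule out.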

I also wonder how much ends up on the stack here, given the HUGE number of
arguments being passed around, though I suppose you'd be pushing and popping
some of those even if these weren't inlined.

I do wonder whether we'd want to carefully check the generated code on
different arches for this, though!
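
FWIW, a before/after build of mm/mprotect.o per arch run through
scripts/bloat-o-meter would at least show what each toolchain actually
does with these, e.g. (the saved object names here are just mine):

  ./scripts/bloat-o-meter mprotect.o.before mprotect.o.after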

> ---
> mm/mprotect.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 9681f055b9fc..1bd0d4aa07c2 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -103,7 +103,7 @@ bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
> return can_change_shared_pte_writable(vma, pte);
> }
>
> -static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> +static __always_inline int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> pte_t pte, int max_nr_ptes, fpb_t flags)
> {
> /* No underlying folio, so cannot batch */
> @@ -117,9 +117,9 @@ static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
> }
>
> /* Set nr_ptes number of ptes, starting from idx */
> -static void prot_commit_flush_ptes(struct vm_area_struct *vma, unsigned long addr,
> - pte_t *ptep, pte_t oldpte, pte_t ptent, int nr_ptes,
> - int idx, bool set_write, struct mmu_gather *tlb)
> +static __always_inline void prot_commit_flush_ptes(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *ptep, pte_t oldpte, pte_t ptent,
> + int nr_ptes, int idx, bool set_write, struct mmu_gather *tlb)
> {
> /*
> * Advance the position in the batch by idx; note that if idx > 0,
> @@ -169,7 +169,7 @@ static int page_anon_exclusive_sub_batch(int start_idx, int max_len,
> * pte of the batch. Therefore, we must individually check all pages and
> * retrieve sub-batches.
> */
> -static void commit_anon_folio_batch(struct vm_area_struct *vma,
> +static __always_inline void commit_anon_folio_batch(struct vm_area_struct *vma,
> struct folio *folio, struct page *first_page, unsigned long addr, pte_t *ptep,
> pte_t oldpte, pte_t ptent, int nr_ptes, struct mmu_gather *tlb)
> {
> --
> 2.53.0
>