[PATCH v3 0/5] Optimize mprotect() for large folios

From: Dev Jain
Date: Mon May 19 2025 - 03:48:52 EST


This patchset optimizes the mprotect() system call for large folios
by PTE-batching.
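
The gist of the change, as a toy userspace model only (all names below are
made up for illustration; none of this is the kernel code from the series):
instead of doing a read/modify/write per PTE, the protection-change loop
detects how many consecutive PTEs map consecutive pages of the same large
folio and handles them as one batch, so per-batch costs (such as operating
on an arm64 contpte block) are paid once rather than once per PTE.

struct toy_pte {
	unsigned long pfn;	/* page frame mapped by this entry */
	unsigned int prot;	/* toy protection bits */
};

/*
 * Length of the run starting at ptes[0] that maps consecutive pfns
 * (standing in for consecutive pages of one large folio).
 */
static unsigned long toy_pte_batch(const struct toy_pte *ptes, unsigned long max_nr)
{
	unsigned long nr = 1;

	while (nr < max_nr && ptes[nr].pfn == ptes[0].pfn + nr)
		nr++;
	return nr;
}

static void toy_change_protection(struct toy_pte *ptes, unsigned long nr_ptes,
				  unsigned int newprot)
{
	unsigned long i = 0;

	while (i < nr_ptes) {
		unsigned long nr = toy_pte_batch(&ptes[i], nr_ptes - i);

		/*
		 * The real series does the equivalent once per batch via the
		 * batched ptep_modify_prot_start()/commit() helpers; the toy
		 * just rewrites the bits.
		 */
		for (unsigned long j = 0; j < nr; j++)
			ptes[i + j].prot = newprot;
		i += nr;
	}
}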

We use the following test cases to measure performance; each case mprotect()s
the mapped memory to read-only and then back to read-write, 40 times:

Test case 1: Mapping 1G of memory, touching it to get PMD-THPs, then
pte-mapping those THPs
Test case 2: Mapping 1G of memory with 64K mTHPs
Test case 3: Mapping 1G of memory with 4K pages
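
The THP policy for each case can be selected with the per-size sysfs knobs
under /sys/kernel/mm/transparent_hugepage/ (hugepages-2048kB for T1,
hugepages-64kB for T2), with T3 presumably running with THP disabled for the
mapping (policy "never", or MADV_NOHUGEPAGE before the first touch). A rough
helper for flipping a knob, purely illustrative and not necessarily the exact
harness used for the numbers below:

#include <stdio.h>

/*
 * Illustrative only: write a policy string ("always", "never", ...) into a
 * per-size knob, e.g.
 * /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled for the 64K
 * mTHP case. Needs root.
 */
static int set_thp_policy(const char *knob, const char *policy)
{
	FILE *f = fopen(knob, "w");

	if (!f)
		return -1;
	fprintf(f, "%s\n", policy);
	return fclose(f);
}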

Average execution time on arm64, Apple M3:
Before the patchset:
T1: 7.9 seconds T2: 7.9 seconds T3: 4.2 seconds

After the patchset:
T1: 2.1 seconds T2: 2.2 seconds T3: 4.3 seconds

Comparing T1/T2 against T3 before the patchset shows the regression introduced
by ptep_get() on a contpte block; the patchset removes that regression as well.
For large folios we get an almost 74% performance improvement, the trade-off
being a slight degradation in the small folio case.

Here is the test program:

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <unistd.h>

#define SIZE (1024*1024*1024)

unsigned long pmdsize = (1UL << 21);
unsigned long pagesize = (1UL << 12);

static void pte_map_thps(char *mem, size_t size)
{
	size_t offs;
	int ret = 0;

	/* PTE-map each THP by temporarily splitting the VMAs. */
	for (offs = 0; offs < size; offs += pmdsize) {
		ret |= madvise(mem + offs, pagesize, MADV_DONTFORK);
		ret |= madvise(mem + offs, pagesize, MADV_DOFORK);
	}

	if (ret) {
		fprintf(stderr, "ERROR: madvise() failed\n");
		exit(1);
	}
}

int main(int argc, char *argv[])
{
	char *p;

	/* Map at a fixed hint; bail out if the kernel placed it elsewhere. */
	p = mmap((void *)(1UL << 30), SIZE, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p != (void *)(1UL << 30)) {
		perror("mmap");
		return 1;
	}

	/*
	 * Touch everything to get PMD-THPs; MADV_NOHUGEPAGE prevents
	 * khugepaged from collapsing them again once split.
	 */
	memset(p, 0, SIZE);
	if (madvise(p, SIZE, MADV_NOHUGEPAGE))
		perror("madvise");
	explicit_bzero(p, SIZE);
	pte_map_thps(p, SIZE);

	for (int loops = 0; loops < 40; loops++) {
		if (mprotect(p, SIZE, PROT_READ))
			perror("mprotect"), exit(1);
		if (mprotect(p, SIZE, PROT_READ | PROT_WRITE))
			perror("mprotect"), exit(1);
		explicit_bzero(p, SIZE);
	}

	return 0;
}
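
The numbers above are average execution times; to time just the mprotect()
loop one could wrap it with something like the following (a suggestion, not
part of the test as posted):

#include <time.h>

/* Monotonic clock in seconds, for bracketing the loop in main(). */
static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

/*
 * Usage in main():
 *	double t0 = now_sec();
 *	... the 40-iteration mprotect() loop ...
 *	printf("loop: %.2f s\n", now_sec() - t0);
 */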

The patchset is rebased onto mm-unstable (9ead4336d7c07f085def6ab372245640a22af5bd).

v2->v3:
- Add comments for the new APIs (Ryan, Lorenzo)
- Instead of refactoring, use a "skip_batch" label
- Move arm64 patches to the end (Ryan)
- In can_change_pte_writable(), check AnonExclusive page-by-page (David H)
- Resolve implicit declaration; tested build on x86 (Lance Yang)

v1->v2:
- Rebase onto mm-unstable (6ebffe676fcf: util_macros.h: make the header more resilient)
- Abridge the anon-exclusive condition (Lance Yang)

Dev Jain (5):
mm: Optimize mprotect() by batch-skipping PTEs
mm: Add batched versions of ptep_modify_prot_start/commit
mm: Optimize mprotect() by PTE batching
arm64: Add batched version of ptep_modify_prot_start
arm64: Add batched version of ptep_modify_prot_commit

arch/arm64/include/asm/pgtable.h | 10 ++
arch/arm64/mm/mmu.c | 21 ++++-
include/linux/mm.h | 7 +-
include/linux/pgtable.h | 79 ++++++++++++++++
mm/mprotect.c | 154 +++++++++++++++++++++++++------
mm/pgtable-generic.c | 16 +++-
6 files changed, 246 insertions(+), 41 deletions(-)

--
2.30.2