Re: [PATCH v2 1/3] mm/page_alloc: Optimize free_contig_range()
From: Zi Yan
Date: Mon Mar 16 2026 - 12:03:45 EST
On 16 Mar 2026, at 11:21, Vlastimil Babka wrote:
> On 3/16/26 12:31, Muhammad Usama Anjum wrote:
>> From: Ryan Roberts <ryan.roberts@xxxxxxx>
>>
>> Decompose the range of order-0 pages to be freed into the set of largest
>> possible power-of-2 size and aligned chunks and free them to the pcp or
>> buddy. This improves on the previous approach which freed each order-0
>> page individually in a loop. Testing shows performance to be improved by
>> more than 10x in some cases.
>>
>> Since each page is order-0, we must decrement each page's reference
>> count individually and only consider the page for freeing as part of a
>> high order chunk if the reference count goes to zero. Additionally
>> free_pages_prepare() must be called for each individual order-0 page
>> too, so that the struct page state and global accounting state can be
>> appropriately managed. But once this is done, the resulting high order
>> chunks can be freed as a unit to the pcp or buddy.
>>
>> This significantly speeds up the free operation but also has the side
>> benefit that high order blocks are added to the pcp instead of each page
>> ending up on the pcp order-0 list; memory remains more readily available
>> in high orders.
>>
>> vmalloc will shortly become a user of this new optimized
>> free_contig_range() since it aggressively allocates high order
>> non-compound pages, but then calls split_page() to end up with
>> contiguous order-0 pages. These can now be freed much more efficiently.
>>
>> The execution time of the following function was measured in a server
>> class arm64 machine:
>>
>> static int page_alloc_high_order_test(void)
>> {
>> unsigned int order = HPAGE_PMD_ORDER;
>> struct page *page;
>> int i;
>>
>> for (i = 0; i < 100000; i++) {
>> page = alloc_pages(GFP_KERNEL, order);
>> if (!page)
>> return -1;
>> split_page(page, order);
>> free_contig_range(page_to_pfn(page), 1UL << order);
>> }
>>
>> return 0;
>> }
>>
>> Execution time before: 4097358 usec
>> Execution time after: 729831 usec
>>
>> Perf trace before:
>>
>> 99.63% 0.00% kthreadd [kernel.kallsyms] [.] kthread
>> |
>> ---kthread
>> 0xffffb33c12a26af8
>> |
>> |--98.13%--0xffffb33c12a26060
>> | |
>> | |--97.37%--free_contig_range
>> | | |
>> | | |--94.93%--___free_pages
>> | | | |
>> | | | |--55.42%--__free_frozen_pages
>> | | | | |
>> | | | | --43.20%--free_frozen_page_commit
>> | | | | |
>> | | | | --35.37%--_raw_spin_unlock_irqrestore
>> | | | |
>> | | | |--11.53%--_raw_spin_trylock
>> | | | |
>> | | | |--8.19%--__preempt_count_dec_and_test
>> | | | |
>> | | | |--5.64%--_raw_spin_unlock
>> | | | |
>> | | | |--2.37%--__get_pfnblock_flags_mask.isra.0
>> | | | |
>> | | | --1.07%--free_frozen_page_commit
>> | | |
>> | | --1.54%--__free_frozen_pages
>> | |
>> | --0.77%--___free_pages
>> |
>> --0.98%--0xffffb33c12a26078
>> alloc_pages_noprof
>>
>> Perf trace after:
>>
>> 8.42% 2.90% kthreadd [kernel.kallsyms] [k] __free_contig_range
>> |
>> |--5.52%--__free_contig_range
>> | |
>> | |--5.00%--free_prepared_contig_range
>> | | |
>> | | |--1.43%--__free_frozen_pages
>> | | | |
>> | | | --0.51%--free_frozen_page_commit
>> | | |
>> | | |--1.08%--_raw_spin_trylock
>> | | |
>> | | --0.89%--_raw_spin_unlock
>> | |
>> | --0.52%--free_pages_prepare
>> |
>> --2.90%--ret_from_fork
>> kthread
>> 0xffffae1c12abeaf8
>> 0xffffae1c12abe7a0
>> |
>> --2.69%--vfree
>> __free_contig_range
>>
>> Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
>> Co-developed-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
>> Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxx>
>> ---
>> Changes since v1:
>> - Rebase on mm-new
>> - Move FPI_PREPARED check inside __free_pages_prepare() now that
>> fpi_flags are already being passed.
>> - Add todo (Zi Yan)
>> - Rerun benchmarks
>> - Convert VM_BUG_ON_PAGE() to VM_WARN_ON_ONCE()
>> - Rework order calculation in free_prepared_contig_range() and use
>> MAX_PAGE_ORDER as high limit instead of pageblock_order as it must
>> be up to internal __free_frozen_pages() how it frees them
>> ---
>> include/linux/gfp.h | 2 +
>> mm/page_alloc.c | 110 ++++++++++++++++++++++++++++++++++++++++++--
>> 2 files changed, 108 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index f82d74a77cad8..96ac7aae370c4 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -467,6 +467,8 @@ void free_contig_frozen_range(unsigned long pfn, unsigned long nr_pages);
>> void free_contig_range(unsigned long pfn, unsigned long nr_pages);
>> #endif
>>
>> +unsigned long __free_contig_range(unsigned long pfn, unsigned long nr_pages);
>> +
>> DEFINE_FREE(free_page, void *, free_page((unsigned long)_T))
>>
>> #endif /* __LINUX_GFP_H */
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 75ee81445640b..6a9430f720579 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -91,6 +91,13 @@ typedef int __bitwise fpi_t;
>> /* Free the page without taking locks. Rely on trylock only. */
>> #define FPI_TRYLOCK ((__force fpi_t)BIT(2))
>>
>> +/*
>> + * free_pages_prepare() has already been called for page(s) being freed.
>> + * TODO: Perform per-subpage free_pages_prepare() checks for order > 0 pages
>> + * (HWPoison, PageNetpp, bad free page).
>> + */
>
> I'm confused, and reading the v1 thread didn't help either. Where would the
> subpages to check come from? AFAICS we start from order-0 pages always.
> __free_contig_range calls free_pages_prepare on every page with order 0
> unconditionally, so we check every page as an order-0 page. If we then free
> the bunch of individually checked pages as a high-order page, there's no
> reason to check those subpages again, no? Am I missing something?
There are two kinds of order > 0 pages: compound and non-compound.
free_pages_prepare() checks all tail pages of a compound order > 0 page too.
For non-compound ones, free_pages_prepare() only runs the free_page_is_bad()
check on the tail pages.
So my guess is that the TODO is to check all subpages of a non-compound
order > 0 page in the same manner. This is based on the assumption that
all non-compound order > 0 page users call split_page() after the allocation,
treat each page individually, and free them all back together. But I am not
sure whether this is true for all users allocating non-compound order > 0
pages. And free_pages_prepare_bulk() might be a better name for such a function.
The above confusion is also a reason I asked Ryan to try adding an
unsplit_page() function to fuse non-compound order > 0 pages back together
and free the fused page as we currently do. But that looks like a pain to
implement. Maybe an alternative to this FPI_PREPARED is to add FPI_FREE_BULK
and, when it is set, loop through all subpages in __free_pages_ok() with
__free_pages_prepare(page + i, 0, fpi_flags & ~FPI_FREE_BULK).
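To make the shape of that alternative concrete, here is a minimal userspace
sketch. FPI_FREE_BULK is hypothetical, and prepare_one() stands in for the
real per-page prepare checks; this is only the control flow I have in mind,
not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int fpi_t;
#define FPI_FREE_BULK ((fpi_t)1U << 3) /* hypothetical new flag */

/* Stand-in for the kernel's per-page prepare checks. */
static bool prepare_one(unsigned long pfn, fpi_t fpi_flags)
{
	(void)fpi_flags;
	return pfn != 42; /* pretend pfn 42 fails its checks */
}

/*
 * Sketch of the suggested behavior: when FPI_FREE_BULK is set on a
 * non-compound order > 0 page, run the prepare checks on every
 * order-0 subpage, clearing the flag for the per-subpage calls.
 */
static bool prepare_range(unsigned long pfn, unsigned int order,
			  fpi_t fpi_flags)
{
	unsigned long i;

	if (!(fpi_flags & FPI_FREE_BULK))
		return prepare_one(pfn, fpi_flags);

	for (i = 0; i < (1UL << order); i++)
		if (!prepare_one(pfn + i, fpi_flags & ~FPI_FREE_BULK))
			return false;
	return true;
}
```

With this, a bad subpage anywhere in the chunk fails the whole bulk prepare,
which is the behavior I would expect from the per-subpage checks above.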
Best Regards,
Yan, Zi