Re: [PATCH v5] mm: introduce a new page type for page pool in page type

From: Byungchul Park

Date: Tue Mar 17 2026 - 22:03:02 EST


On Tue, Mar 17, 2026 at 10:20:08AM +0100, Jesper Dangaard Brouer wrote:
> On 16/03/2026 23.31, Byungchul Park wrote:
> > Currently, the condition 'page->pp_magic == PP_SIGNATURE' is used to
> > determine if a page belongs to a page pool. However, with the planned
> > removal of @pp_magic, we should instead leverage the page_type in struct
> > page, such as PGTY_netpp, for this purpose.
> >
> > Introduce and use the page type APIs e.g. PageNetpp(), __SetPageNetpp(),
> > and __ClearPageNetpp() instead, and remove the existing APIs accessing
> > @pp_magic e.g. page_pool_page_is_pp(), netmem_or_pp_magic(), and
> > netmem_clear_pp_magic().
> >
> > Plus, add @page_type to struct net_iov at the same offset as in struct
> > page so that the page_type APIs can be used for struct net_iov as well.
> > While at it, reorder @type and @owner in struct net_iov to avoid a hole
> > and an increase in the struct size.
> >
> > This work was inspired by the following link:
> >
> > https://lore.kernel.org/all/582f41c0-2742-4400-9c81-0d46bf4e8314@xxxxxxxxx/
> >
> > While at it, move the sanity check for page pool pages to the free path.
> >
> [...]
> see below
>
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 9f2fe46ff69a1..ee81f5c67c18f 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1044,7 +1044,6 @@ static inline bool page_expected_state(struct page *page,
> > #ifdef CONFIG_MEMCG
> > page->memcg_data |
> > #endif
> > - page_pool_page_is_pp(page) |
> > (page->flags.f & check_flags)))
> > return false;
> >
> > @@ -1071,8 +1070,6 @@ static const char *page_bad_reason(struct page *page, unsigned long flags)
> > if (unlikely(page->memcg_data))
> > bad_reason = "page still charged to cgroup";
> > #endif
> > - if (unlikely(page_pool_page_is_pp(page)))
> > - bad_reason = "page_pool leak";
> > return bad_reason;
> > }
> >
> > @@ -1381,9 +1378,17 @@ __always_inline bool __free_pages_prepare(struct page *page,
> > mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> > folio->mapping = NULL;
> > }
> > - if (unlikely(page_has_type(page)))
> > + if (unlikely(page_has_type(page))) {
> > + /* networking expects to clear its page type before releasing */
> > + if (is_check_pages_enabled()) {
> > + if (unlikely(PageNetpp(page))) {
> > + bad_page(page, "page_pool leak");
> > + return false;
> > + }
> > + }
> > /* Reset the page_type (which overlays _mapcount) */
> > page->page_type = UINT_MAX;
> > + }
>
> I need some opinions here. When CONFIG_DEBUG_VM is enabled we get help
> finding (hard to find) page_pool leaks and mark the page as bad.
>
> When CONFIG_DEBUG_VM is disabled we silently "allow" leaking.
> Should we handle this more gracefully here? (release pp resources)
>
> Allowing the page to be reused at this point, when a page_pool instance
> is known to track life-time and (likely) have the page DMA mapped, seems
> problematic to me. Code only clears page->page_type, but IIRC Toke
> added DMA tracking in other fields (plus xarray internally).
>
> I was going to suggest to simply re-export page_pool_release_page() that
> was hidden by 535b9c61bdef ("net: page_pool: hide
> page_pool_release_page()"), but that functionality was lost in
> 07e0c7d3179d ("net: page_pool: merge page_pool_release_page() with
> page_pool_return_page()"). (Cc Jakub/Kuba for that choice).
> Simply page_pool_release_page(page->pp, page).
>
> Opinions?

This is not an issue introduced by this patch, but it's worth discussing. :)

Not only the page pool check but all the bad-state checks are skipped
when is_check_pages_enabled() is off, which means those problems are
silently suppressed in that configuration.

We first need to clarify whether that behavior is intentional. Depending
on the answer, we can discuss how to handle it better. :)

Byungchul
>
> --Jesper