[PATCH] mm/mm_init: Don't iterate pages below ARCH_PFN_OFFSET
From: Ruihan Li
Date: Fri Apr 18 2025 - 12:32:38 EST
Currently, memmap_init() initializes hole_pfn to 0 instead of
ARCH_PFN_OFFSET. init_unavailable_range() then iterates over every page
from pfn 0 up to the first available page, but it accomplishes nothing
for the pages below ARCH_PFN_OFFSET, since pfn_valid() fails for all of
them.
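For reference, the wasted work comes from the per-pfn walk in
init_unavailable_range(), which looks roughly like the sketch below
(simplified, not the exact kernel code):

	for (pfn = spfn; pfn < epfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;	/* always taken below ARCH_PFN_OFFSET */
		__init_single_page(pfn_to_page(pfn), pfn, zone, node);
	}

With hole_pfn starting at 0, the first hole passed to this loop begins
at pfn 0, so every pfn below ARCH_PFN_OFFSET is visited only to be
rejected by pfn_valid().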
If ARCH_PFN_OFFSET is very large (e.g., around 2^64 - 2 GiB when the
kernel is built as a library and loaded at a very high address), this
pointless iteration over the pages below ARCH_PFN_OFFSET takes so long
that the kernel appears to hang at boot.
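For a sense of scale (assuming 4 KiB pages): an offset of 2^64 - 2 GiB
corresponds to roughly (2^64 - 2^31) >> 12, i.e. about 2^52 or 4.5e15
pfns, each touched just to fail the pfn_valid() check.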
Initialize hole_pfn to ARCH_PFN_OFFSET instead, which avoids the
problematic and useless iteration described above.
Fixes: 907ec5fca3dc ("mm: zero remaining unavailable struct pages")
Signed-off-by: Ruihan Li <lrh2000@xxxxxxxxxx>
---
mm/mm_init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 84f14fa12..b3ae9f797 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -966,7 +966,7 @@ static void __init memmap_init_zone_range(struct zone *zone,
 static void __init memmap_init(void)
 {
 	unsigned long start_pfn, end_pfn;
-	unsigned long hole_pfn = 0;
+	unsigned long hole_pfn = ARCH_PFN_OFFSET;
 	int i, j, zone_id = 0, nid;
 
 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
--
2.49.0