Re: [PATCH v8 0/5] arm64: support FEAT_BBM level 2 and large block mapping when rodata=full

From: Ryan Roberts

Date: Tue Mar 17 2026 - 05:13:23 EST


On 17/03/2026 08:47, Kevin Brodsky wrote:
> On 17/03/2026 01:15, Yang Shi wrote:
>>
>>
>> On 3/16/26 8:47 AM, Ryan Roberts wrote:
>>> Thanks for the report!
>>>
>>> + Kevin, who was looking at some adjacent issues and may have some
>>> ideas for how
>>> to fix.
>
> Indeed, specifically to protect page table pages (PTPs) by mapping them
> with a privileged pkey. pagetable_alloc() is called on several occasions
> before all secondaries are up (including from init_IRQ() in fact), but
> we cannot call set_memory_pkey() at that point for the same reason that
> Jinjiang pointed out.
>
> The approach I went for is to allocate a whole block on boot and defer
> the call to set_memory_pkey() until it's safe to do so [1]. It's not a
> particularly nice solution as this restricts the number of PTPs we can
> allocate in that window, especially if more of them get allocated when
> set_memory_decrypted() splits the linear map.
>
> [1]
> https://lore.kernel.org/linux-hardening/20260227175518.3728055-17-kevin.brodsky@xxxxxxx/
>
>>>
>>>
>>> On 16/03/2026 07:35, Jinjiang Tu wrote:
>>>> On 2025/9/18 3:02, Yang Shi wrote:
>>>>> On systems with BBML2_NOABORT support, it causes the linear map to be
>>>>> mapped with large blocks, even when rodata=full, and leads to some
>>>>> nice performance improvements.
>>>> Hi,
>>
>> Hi Jinjiang,
>>
>> Thanks for reporting the problem.
>>
>>>>
>>>> I've found that this feature is incompatible with realms. The call
>>>> trace is as follows:
>>>>
>>>> [    0.000000][    T0] ------------[ cut here ]------------
>>>> [    0.000000][    T0] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/pageattr.c:56 pageattr_pmd_entry+0x60/0x78
>>>> [    0.000000][    T0] Modules linked in:
>>>> [    0.000000][    T0] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 6.6.0 #16
>>>> [    0.000000][    T0] Hardware name: linux,dummy-virt (DT)
>>>> [    0.000000][    T0] pstate: 800000c5 (Nzcv daIF -PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>>> [    0.000000][    T0] pc : pageattr_pmd_entry+0x60/0x78
>>>> [    0.000000][    T0] lr : walk_pmd_range.isra.0+0x170/0x1f0
>>>> [    0.000000][    T0] sp : ffffcb90a0f337d0
>>>> [    0.000000][    T0] x29: ffffcb90a0f337d0 x28: 0000000000000000 x27: ffff0000035e0000
>>>> [    0.000000][    T0] x26: ffffcb90a0f338f8 x25: ffff00001fff60d0 x24: ffff0000035d0000
>>>> [    0.000000][    T0] x23: 0400000000000001 x22: 0c00000000000001 x21: ffff0000035dffff
>>>> [    0.000000][    T0] x20: ffffcb909fe3b7f0 x19: ffff0000035e0000 x18: ffffffffffffffff
>>>> [    0.000000][    T0] x17: 7220303030303178 x16: 307e303030306435 x15: ffffcb90a0f334c8
>>>> [    0.000000][    T0] x14: 0000000000000000 x13: 205d305420202020 x12: 5b5d303030303030
>>>> [    0.000000][    T0] x11: 00000000ffff7fff x10: 00000000ffff7fff x9 : ffffcb909f1e27d8
>>>> [    0.000000][    T0] x8 : 00000000000bffe8 x7 : c0000000ffff7fff x6 : 0000000000000001
>>>> [    0.000000][    T0] x5 : 0000000000000001 x4 : 0078000083400705 x3 : ffffcb90a0f338f8
>>>> [    0.000000][    T0] x2 : 0000000000010000 x1 : ffff0000035d0000 x0 : ffff00001fff60d0
>>>> [    0.000000][    T0] Call trace:
>>>> [    0.000000][    T0]  pageattr_pmd_entry+0x60/0x78
>>>> [    0.000000][    T0]  walk_pud_range+0x124/0x190
>>>> [    0.000000][    T0]  walk_pgd_range+0x158/0x1b0
>>>> [    0.000000][    T0]  walk_kernel_page_table_range_lockless+0x58/0x98
>>>> [    0.000000][    T0]  update_range_prot+0xb8/0x108
>>>> [    0.000000][    T0]  __change_memory_common+0x30/0x1a8
>>>> [    0.000000][    T0]  __set_memory_enc_dec.part.0+0x170/0x260
>>>> [    0.000000][    T0]  realm_set_memory_decrypted+0x6c/0xb0
>>>> [    0.000000][    T0]  set_memory_decrypted+0x38/0x58
>>>> [    0.000000][    T0]  its_alloc_pages_node+0xc4/0x140
>>>> [    0.000000][    T0]  its_probe_one+0xbc/0x3c0
>>>> [    0.000000][    T0]  its_of_probe.isra.0+0x130/0x220
>>>> [    0.000000][    T0]  its_init+0x160/0x2f8
>>>> [    0.000000][    T0]  gic_init_bases+0x1fc/0x318
>>>> [    0.000000][    T0]  gic_of_init+0x2a0/0x300
>>>> [    0.000000][    T0]  of_irq_init+0x238/0x4b8
>>>> [    0.000000][    T0]  irqchip_init+0x20/0x50
>>>> [    0.000000][    T0]  init_IRQ+0x1c/0x100
>>>> [    0.000000][    T0]  start_kernel+0x1ec/0x4f0
>>>> [    0.000000][    T0]  __primary_switched+0xbc/0xd0
>>>> [    0.000000][    T0] ---[ end trace 0000000000000000 ]---
>>>> [    0.000000][    T0] ------------[ cut here ]------------
>>>> [    0.000000][    T0] Failed to decrypt memory, 16 pages will be leaked
>>>>
>>>> The realm feature relies on rodata=full to dynamically update kernel
>>>> page table prots.
>>>>
>>>> In init_IRQ(), realm_set_memory_decrypted() is called to update the
>>>> kernel page table prot. At this point, the secondary cpus aren't
>>>> booted yet, the BBML2 noabort feature isn't initialized, and
>>>> system_supports_bbml2_noabort() still returns false. As a result,
>>>> split_kernel_leaf_mapping() is skipped, leading to the
>>>> WARN_ON_ONCE((next - addr) != PMD_SIZE) in pageattr_pmd_entry().
>>> If no secondary cpus are yet running, then it is technically safe to
>>> split because we know all online cpus (i.e. just the boot cpu) support
>>> BBML2_NOABORT. So we could explicitly only disallow splitting during
>>> the window between booting the secondary cpus and finalizing the
>>> system caps. Feels a bit hacky though...
>>
>> I think we can check whether the system features have been finalized or
>> not. If they have not been finalized yet, we just need to check whether
>> the current cpu (which should be just the boot cpu) supports
>> BBML2_NOABORT or not. It sounds ok to me.
>
> That assumes that no secondary has booted yet, otherwise we cannot
> safely split live mappings without knowing that all CPUs support
> BBML2-noabort. It might work for this particular case, but it is
> fragile. It wouldn't help for the page table protection case, as PTPs
> get allocated while secondaries are booting up (e.g. stack allocation
> when forking kthreadd).
>
>>
>>>
>>>> Before setup_system_features(), we don't know whether all cpus
>>>> support BBML2 noabort, so we can't split the kernel page table, in
>>>> case another cpu that doesn't support BBML2 noabort is running.
>>>>
>>>> How could we fix this issue?
>>>>
>>>> 1. Force pte mapping if the realm feature is enabled? Although
>>>> force_pte_mapping() returns true if is_realm_world() returns true,
>>>> arm64_rsi_init() is called after map_mem(), so is_realm_world() still
>>>> returns false during map_mem(). Thus the realm feature relies on
>>>> rodata=full. If we fix it this way, we need to add a new cmdline
>>>> option to force pte mapping.
>>
>> I don't quite get why is_realm_world() relies on rodata=full. I
>> understand realm needs PTE mapping if BBML2_NOABORT is not supported.
>> But it doesn't mean realm relies on rodata=full.
>>
>>> I think we just need to make is_realm_world() work earlier in boot? I
>>> think this
>>> has been a known issue for a while. Not sure if there is any plan to
>>> fix it
>>> though.
>>>
>>>> 2. Could we try to split the kernel page table before
>>>> setup_system_features()?
>>> Another option would be to initially map by pte then collapse to block
>>> mappings once we have determined that all cpus support BBML2_NOABORT.
>>> We originally opted not to do that because it's a tax on symmetric
>>> systems. But we could throw in the towel if it's the least bad
>>> solution we can come up with for solving this. I think it might help
>>> some of Kevin's use cases too?
>>
>> Maybe an option too. When we discussed this there was no use case for
>> direct mapping collapse. But if we can have multiple use cases, it may
>> be worth it.

I could imagine that if user space creates and destroys lots of secretmem
areas, then it will completely split the linear map to ptes, and currently
that will never recover. So I think in the long term, having the ability to
collapse would be useful. I just don't particularly like forcing symmetric
systems to map by pte initially (which is slow) only to collapse later (which
will cost even more time). But it does feel inherently more robust.

>> AFAICT, the ROX execmem cache may need this, which Will
>> or someone else from Google is going to work on.
>
> Not sure about the execmem cache (do we call execmem_alloc() before
> secondaries are up?), but I think that would indeed solve the issue for
> the page table protection use-case. Besides, in terms of complexity it's
> probably not much worse than what we currently have, i.e. basically the
> reverse (splitting the linear map if some CPU doesn't have
> BBML2-noabort). Penalising symmetric systems is not great, though.
>
> - Kevin
>
>>
>> Checking the current cpu's BBML2_NOABORT capability before the system
>> features are finalized seems like a fast way to stop the bleeding IMHO,
>> before we find a more elegant long-term solution.
>>
>> Thanks,
>> Yang
>>
>>> [...]