Re: [PATCH v2] nvme: fix memory allocation in nvme_pr_read_keys()
From: Sungwoo Kim
Date: Fri Mar 20 2026 - 11:41:53 EST
On Fri, Mar 20, 2026 at 7:23 AM Dan Carpenter <dan.carpenter@xxxxxxxxxx> wrote:
>
> We were reviewing CVEs and this patch doesn't really fix the problem.
Thank you for reviewing the CVE and this patch. I still believe that
this patch is correct.
>
> On Fri, Feb 27, 2026 at 07:19:28PM -0500, Sungwoo Kim wrote:
> > nvme_pr_read_keys() takes num_keys from userspace and uses it to
> > calculate the allocation size for rse via struct_size(). The upper
> > limit is PR_KEYS_MAX (64K).
> >
> > A malicious or buggy userspace can pass a large num_keys value that
> > results in a 4MB allocation attempt at most, causing a warning in
> > the page allocator when the order exceeds MAX_PAGE_ORDER.
> >
> > To fix this, use kvzalloc() instead of kzalloc().
> >
> > This bug has the same root cause and fix as the patch below:
> > https://lore.kernel.org/linux-block/20251212013510.3576091-1-kartikey406@xxxxxxxxx/
>
> We never merged this patch. The fix that went in was correct.
> It is commit a58383fa45c7 ("block: add allocation size check in
> blkdev_pr_read_keys()").
My bad. Thanks for pointing me to the correct commit.
>
> >
> > Warning log:
> > WARNING: mm/page_alloc.c:5216 at __alloc_frozen_pages_noprof+0x5aa/0x2300 mm/page_alloc.c:5216, CPU#1: syz-executor117/272
> > Modules linked in:
> > CPU: 1 UID: 0 PID: 272 Comm: syz-executor117 Not tainted 6.19.0 #1 PREEMPT(voluntary)
> > Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.3-0-ga6ed6b701f0a-prebuilt.qemu.org 04/01/2014
> > RIP: 0010:__alloc_frozen_pages_noprof+0x5aa/0x2300 mm/page_alloc.c:5216
> > Code: ff 83 bd a8 fe ff ff 0a 0f 86 69 fb ff ff 0f b6 1d f9 f9 c4 04 80 fb 01 0f 87 3b 76 30 ff 83 e3 01 75 09 c6 05 e4 f9 c4 04 01 <0f> 0b 48 c7 85 70 fe ff ff 00 00 00 00 e9 8f fd ff ff 31 c0 e9 0d
> > RSP: 0018:ffffc90000fcf450 EFLAGS: 00010246
> > RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1ffff920001f9ea0
> > RDX: 0000000000000000 RSI: 000000000000000b RDI: 0000000000040dc0
> > RBP: ffffc90000fcf648 R08: ffff88800b6c3380 R09: 0000000000000001
> > R10: ffffc90000fcf840 R11: ffff88807ffad280 R12: 0000000000000000
> > R13: 0000000000040dc0 R14: 0000000000000001 R15: ffffc90000fcf620
> > FS: 0000555565db33c0(0000) GS:ffff8880be26c000(0000) knlGS:0000000000000000
> > CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> > CR2: 000000002000000c CR3: 0000000003b72000 CR4: 00000000000006f0
> > Call Trace:
> > <TASK>
> > alloc_pages_mpol+0x236/0x4d0 mm/mempolicy.c:2486
> > alloc_frozen_pages_noprof+0x149/0x180 mm/mempolicy.c:2557
> > ___kmalloc_large_node+0x10c/0x140 mm/slub.c:5598
> > __kmalloc_large_node_noprof+0x25/0xc0 mm/slub.c:5629
> > __do_kmalloc_node mm/slub.c:5645 [inline]
> > __kmalloc_noprof+0x483/0x6f0 mm/slub.c:5669
> > kmalloc_noprof include/linux/slab.h:961 [inline]
> > kzalloc_noprof include/linux/slab.h:1094 [inline]
> > nvme_pr_read_keys+0x8f/0x4c0 drivers/nvme/host/pr.c:245
> > blkdev_pr_read_keys block/ioctl.c:456 [inline]
> > blkdev_common_ioctl+0x1b71/0x29b0 block/ioctl.c:730
> > blkdev_ioctl+0x299/0x700 block/ioctl.c:786
> > vfs_ioctl fs/ioctl.c:51 [inline]
> > __do_sys_ioctl fs/ioctl.c:597 [inline]
> > __se_sys_ioctl fs/ioctl.c:583 [inline]
> > __x64_sys_ioctl+0x1bf/0x220 fs/ioctl.c:583
> > x64_sys_call+0x1280/0x21b0 mnt/fuzznvme_1/fuzznvme/linux-build/v6.19/./arch/x86/include/generated/asm/syscalls_64.h:17
> > do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
> > do_syscall_64+0x71/0x330 arch/x86/entry/syscall_64.c:94
> > entry_SYSCALL_64_after_hwframe+0x76/0x7e
> > RIP: 0033:0x7fb893d3108d
> > Code: 28 c3 e8 46 1e 00 00 66 0f 1f 44 00 00 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
> > RSP: 002b:00007ffff61f2f38 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
> > RAX: ffffffffffffffda RBX: 00007ffff61f3138 RCX: 00007fb893d3108d
> > RDX: 0000000020000040 RSI: 00000000c01070ce RDI: 0000000000000003
> > RBP: 0000000000000001 R08: 0000000000000000 R09: 00007ffff61f3138
> > R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
> > R13: 00007ffff61f3128 R14: 00007fb893dae530 R15: 0000000000000001
> > </TASK>
> >
> > Fixes: 5fd96a4e15de ("nvme: Add pr_ops read_keys support")
> > Acked-by: Chao Shi <cshi008@xxxxxxx>
> > Acked-by: Weidong Zhu <weizhu@xxxxxxx>
> > Acked-by: Dave Tian <daveti@xxxxxxxxxx>
> > Signed-off-by: Sungwoo Kim <iam@xxxxxxxxxxxx>
> > ---
> > v2: add missing kvfree
> >
> > drivers/nvme/host/pr.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/nvme/host/pr.c b/drivers/nvme/host/pr.c
> > index ad2ecc2f49a97..fe7dbe2648158 100644
> > --- a/drivers/nvme/host/pr.c
> > +++ b/drivers/nvme/host/pr.c
> > @@ -242,7 +242,7 @@ static int nvme_pr_read_keys(struct block_device *bdev,
> > if (rse_len > U32_MAX)
> > return -EINVAL;
>
> This "rse_len > U32_MAX" check is kind of nonsense. Anything larger
This check is pre-existing and irrelevant to this bug.
> than INT_MAX will trigger a stack trace (which is the bug that this
> patch is trying to fix).
No. This patch fixes a warning that occurs when get_order(rse_len) >
MAX_PAGE_ORDER.
>
> Copy the other fix for blkdev_pr_read_keys().
This patch already copies that fix: changing kzalloc() to kvzalloc()
is exactly what it does.
If I missed something, please let me know.
Best,
Sungwoo.
>
> regards,
> dan carpenter
>
> >
> > - rse = kzalloc(rse_len, GFP_KERNEL);
> > + rse = kvzalloc(rse_len, GFP_KERNEL);
> > if (!rse)
> > return -ENOMEM;
> >
> > @@ -267,7 +267,7 @@ static int nvme_pr_read_keys(struct block_device *bdev,
> > }
> >
> > free_rse:
> > - kfree(rse);
> > + kvfree(rse);
> > return ret;
> > }
> >
> > --
> > 2.47.3
> >
>