[PATCH] bpf: fix umin/umax when lower bits fall outside u32 range

From: Helen Koike

Date: Fri Mar 27 2026 - 15:57:20 EST


Fix the case where min/max adjustments from a conditional jump cross
u32 boundaries.

The bug was found in the following scenario (see the full BPF prog
below):

insn 2: (bf) r0 = r2
...
insn 4: (3d) if r3 >= r0 goto pc+1
...
insn 9: (67) r2 <<= 7
...
insn 13: (65) if r7 s> 2 goto pc-12 # back to insn 2

Since this is a loop, in some later iteration both u32_min_value and
u32_max_value of r0 converge to 0, due to the shift of r2 at insn 9 and
the subsequent assignment to r0 at insn 2.

When evaluating the FALSE branch of the conditional jump at insn 4, the
verifier must enforce r0 > r3, so it raises umin of r0 to one above
umin of r3:

r0.umin = r3.umin + 1

This is done by this part of the code:

static void regs_refine_cond_op(...) {
...
case BPF_JLT:
...
u64 new_umin = max(reg1->umin_value + 1, reg2->umin_value);

The problem is that u32_min_value/u32_max_value are still zero, so the
bounds need to be reconciled by reg_bounds_sync(), which calls
__reg_deduce_mixed_bounds() to perform the following tightening:

new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
...
reg->umin_value = max_t(u64, reg->umin_value, new_umin);

Here it hits the situation: u64=[0x1,0xfffffff800000000] u32=[0x0,0x0]
Leading to:
new_umin = 0
reg->umin_value = max(1, 0) = 1

Which is inconsistent, because (u32)reg->umin_value > reg->u32_max_value.

That inconsistency propagates until it triggers:

verifier bug: REG INVARIANTS VIOLATION (true_reg2): range bounds violation
u64=[0x0, 0x7800000000] s64=[0x0, 0xffffffffffffffff]
u32=[0x80000000, 0x0] s32=[0x0, 0xffffffff] var_off=(0x0, 0x7800000000)


Fix __reg_deduce_mixed_bounds() to detect the case when the u32
boundaries are crossed and advance umin to the next 32-bit block (or
retreat umax to the previous one).

Example:

If we have:
u64=[0x1,0xfffffff800000000] u32=[0x0,0x0]
Go to the next 32-bit block:
u64=[0x100000000,0xfffffff800000000] u32=[0x0,0x0]

This keeps the lower 32-bit bounds consistent. The same applies to the
max value, except it retreats to the previous 32-bit block.


Full BPF prog for reference:
============================

NOTE: this was extracted from a Syzbot reproducer from another bug
report.

insn 0: (61) r2 = *(u32 *)(r1 + 48)
insn 1: (61) r3 = *(u32 *)(r1 + 76)
insn 2: (bf) r0 = r2
insn 3: (16) if w0 == 21502727 goto pc+16
insn 4: (3d) if r3 >= r0 goto pc+1
insn 5: (0f) r2 += r0
insn 6: (bc) w6 = (s16)w2
insn 7: (bf) r7 = (s32)r6
insn 8: (16) if w2 == 524047 goto pc+0
insn 9: (67) r2 <<= 7
insn 10: (36) if w6 >= -268376562 goto pc+0
insn 11: (bf) r5 = r0
insn 12: (0f) r5 += r6
insn 13: (65) if r7 s> 2 goto pc-12
insn 14: (07) r7 += 4194380
insn 15: (1f) r5 -= r7
insn 16: (bf) r4 = r5
insn 17: (07) r5 += -458749
insn 18: (ad) if r3 < r4 goto pc+1
insn 19: (95) exit
insn 20: (05) goto pc+0
insn 21: (95) exit
insn 22: (4d) if r11 & r9 goto pc-28203
insn 23: (99) if w3 >= -2094934247 goto pc-30763

The reproducer can be found at:
https://syzkaller.appspot.com/text?tag=ReproC&x=16773406580000

Fixes: c51d5ad6543c ("bpf: improve deduction of 64-bit bounds from 32-bit bounds")
Signed-off-by: Helen Koike <koike@xxxxxxxxxx>

---

Hi all,

I'm not familiar with the verifier code base or the discussions around
it, but I tried to do my best. I searched to see whether this was
already discussed, but the discussions I found didn't seem related.

Please let me know if this case was already discussed and/or if I
should follow a different path to solving this.

Tested on bpf/master branch.

Thanks in advance.
---
kernel/bpf/verifier.c | 26 +++++++++++++++++++++++---
1 file changed, 23 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index a965b2c45bbe..ddac09c8a9e5 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -2702,9 +2702,29 @@ static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
__u64 new_umin, new_umax;
__s64 new_smin, new_smax;

- /* u32 -> u64 tightening, it's always well-formed */
- new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
- new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
+ /*
+ * If (u32)umin > u32_max, no value in the current upper-32-bit block
+ * satisfies [u32_min, u32_max] while being >= umin; advance umin to
+ * the next block. Otherwise apply standard u32->u64 tightening.
+ */
+ if ((u32)reg->umin_value > reg->u32_max_value)
+ new_umin = ((reg->umin_value & ~0xffffffffULL) + (1ULL << 32)) |
+ reg->u32_min_value;
+ else
+ new_umin = (reg->umin_value & ~0xffffffffULL) |
+ reg->u32_min_value;
+
+ /*
+ * Symmetrically, if (u32)umax < u32_min, retreat umax to the
+ * previous block. Otherwise apply standard u32->u64 tightening.
+ */
+ if ((u32)reg->umax_value < reg->u32_min_value)
+ new_umax = ((reg->umax_value & ~0xffffffffULL) - (1ULL << 32)) |
+ reg->u32_max_value;
+ else
+ new_umax = (reg->umax_value & ~0xffffffffULL) |
+ reg->u32_max_value;
+
reg->umin_value = max_t(u64, reg->umin_value, new_umin);
reg->umax_value = min_t(u64, reg->umax_value, new_umax);
/* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
--
2.53.0