Re: [PATCH] bpf: fix umin/umax when lower bits fall outside u32 range
From: Eduard Zingerman
Date: Fri Mar 27 2026 - 16:53:53 EST
On Fri, 2026-03-27 at 16:48 -0300, Helen Koike wrote:
[...]
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index a965b2c45bbe..ddac09c8a9e5 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2702,9 +2702,29 @@ static void __reg_deduce_mixed_bounds(struct bpf_reg_state *reg)
> __u64 new_umin, new_umax;
> __s64 new_smin, new_smax;
>
> - /* u32 -> u64 tightening, it's always well-formed */
> - new_umin = (reg->umin_value & ~0xffffffffULL) | reg->u32_min_value;
> - new_umax = (reg->umax_value & ~0xffffffffULL) | reg->u32_max_value;
> + /*
> + * If (u32)umin > u32_max, no value in the current upper-32-bit block
> + * satisfies [u32_min, u32_max] while being >= umin; advance umin to
> + * the next block. Otherwise apply standard u32->u64 tightening.
> + */
> + if ((u32)reg->umin_value > reg->u32_max_value)
> + new_umin = (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) |
> + reg->u32_min_value;
> + else
> + new_umin = (reg->umin_value & ~0xffffffffULL) |
> + reg->u32_min_value;
What would happen if there is no next or previous 32-bit block?
E.g. if (reg->umin_value & ~0xffffffffULL) + (1ULL << 32) wraps around.
I guess the argument is that if this happens, there is an invariant
violation already. Will the final register still be in an
invariant-violation state?
Useful picture:
N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
||----|=====|--|----------||----|=====|-------------||--|-|=====|-------------||
|< b >| | |< b >| | |< b >|
| | | |
|<---------------+- a -+---------------->|
| |
|< t >| refined r0 range
Also, as this is based solely on unsigned ranges, the following case
is not covered, right?
N*2^32 (N+1)*2^32 (N+2)*2^32 (N+3)*2^32
||===|---------|------|===||===|----------------|===||===|---------|------|===||
|b >| | |< b||b >| |< b||b >| | |< b|
| | | |
|<-----+----------------- a --------------+-------->|
| |
|<---------------- t ------------->| refined r0 range
Would it be hard to implement something along the same lines as [2] to cover it?
> +
> + /*
> + * Symmetrically, if (u32)umax < u32_min, retreat umax to the
> + * previous block. Otherwise apply standard u32->u64 tightening.
> + */
> + if ((u32)reg->umax_value < reg->u32_min_value)
> + new_umax = (reg->umax_value & ~0xffffffffULL) - (1ULL << 32) |
> + reg->u32_max_value;
> + else
> + new_umax = (reg->umax_value & ~0xffffffffULL) |
> + reg->u32_max_value;
> +
> reg->umin_value = max_t(u64, reg->umin_value, new_umin);
> reg->umax_value = min_t(u64, reg->umax_value, new_umax);
> /* u32 -> s64 tightening, u32 range embedded into s64 preserves range validity */
I think we can move forward with this, as the fate of my RFC is
uncertain. Please add some selftests, e.g. from [1].
[1] https://lore.kernel.org/bpf/20260318-cnum-sync-bounds-v1-4-1f2e455ea711@xxxxxxxxx/
[2] https://github.com/eddyz87/cnum-verif/blob/master/cnum.c#L242