[PATCH] sched_ext: Use READ_ONCE() for bypass_depth in scx_bypass()

From: zhidao su

Date: Mon Mar 23 2026 - 07:24:27 EST


In scx_bypass(), pos->bypass_depth is read under scx_sched_lock to
update per-cpu BYPASSING flags:

	raw_spin_lock(&scx_sched_lock);
	scx_for_each_descendant_pre(pos, sch) {
		if (pos->bypass_depth)	/* bare read */

However, bypass_depth is written by inc_bypass_depth() and
dec_bypass_depth() under scx_bypass_lock (a different lock):

	lockdep_assert_held(&scx_bypass_lock);
	WRITE_ONCE(sch->bypass_depth, sch->bypass_depth + 1);

The read side here holds scx_sched_lock, not scx_bypass_lock, so the
reader and the writers are serialized by different locks and the read
is a lockless concurrent access. All other read sites of bypass_depth
already use READ_ONCE():

L1237: if (unlikely(READ_ONCE(sch->bypass_depth)))
L4083: while (likely(!READ_ONCE(sch->bypass_depth)))

Add the missing READ_ONCE() for consistency with the rest of the code
and to make the concurrency intent explicit to KCSAN.

Signed-off-by: zhidao su <suzhidao@xxxxxxxxxx>
---
kernel/sched/ext.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
index 72a07eb050a3..2654690e3661 100644
--- a/kernel/sched/ext.c
+++ b/kernel/sched/ext.c
@@ -5343,7 +5343,7 @@ static void scx_bypass(struct scx_sched *sch, bool bypass)
 	scx_for_each_descendant_pre(pos, sch) {
 		struct scx_sched_pcpu *pcpu = per_cpu_ptr(pos->pcpu, cpu);
 
-		if (pos->bypass_depth)
+		if (READ_ONCE(pos->bypass_depth))
 			pcpu->flags |= SCX_SCHED_PCPU_BYPASSING;
 		else
 			pcpu->flags &= ~SCX_SCHED_PCPU_BYPASSING;
--
2.43.0