Re: [PATCH sched_ext/for-7.1] sched_ext: idle: Prioritize idle SMT sibling

From: Andrea Righi

Date: Fri Mar 20 2026 - 12:28:05 EST


Hi Cheng-Yang,

On Wed, Mar 18, 2026 at 09:11:29AM +0800, Cheng-Yang Chou wrote:
> Hi Andrea,
>
> On Wed, Mar 18, 2026 at 01:38:42AM +0100, Andrea Righi wrote:
> > In the default built-in idle CPU selection policy, when @prev_cpu is
> > busy and no fully idle core is available, try to place the task on its
> > SMT sibling if that sibling is idle, before searching for any other
> > idle CPU in the same LLC.
> >
> > Migration to the sibling is cheap and keeps the task on the same core,
> > preserving L1 cache and reducing wakeup latency.
> >
> > On large SMT systems this appears to consistently boost throughput by
> > roughly 2-3% on CPU-bound workloads (running a number of tasks equal to
> > the number of SMT cores).
> >
> > Signed-off-by: Andrea Righi <arighi@xxxxxxxxxx>
> > ---
> > kernel/sched/ext_idle.c | 12 ++++++++++++
> > 1 file changed, 12 insertions(+)
> >
> > diff --git a/kernel/sched/ext_idle.c b/kernel/sched/ext_idle.c
> > index c7e4052626979..e0c57355b33b8 100644
> > --- a/kernel/sched/ext_idle.c
> > +++ b/kernel/sched/ext_idle.c
> > @@ -616,6 +616,18 @@ s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu, u64 wake_flags,
> > goto out_unlock;
> > }
> >
> > + /*
> > + * Use @prev_cpu's sibling if it's idle.
> > + */
> > + if (sched_smt_active()) {
> > + for_each_cpu_and(cpu, cpu_smt_mask(prev_cpu), allowed) {
> > + if (cpu == prev_cpu)
> > + continue;
> > + if (scx_idle_test_and_clear_cpu(cpu))
> > + goto out_unlock;
> > + }
> > + }
> > +
> > /*
> > * Search for any idle CPU in the same LLC domain.
> > */
> > --
> > 2.53.0
> >
>
> Overall looks good, just a nit:
>
> The block comment at the top of scx_select_cpu_dfl() still lists 5
> steps. With this patch a new step should be added and the numbering
> updated accordingly.

Ah yes, good catch, we should update the comment as well. Will send a v2.

Thanks,
-Andrea