Re: [PATCH 0/4] sched/fair: SMT-aware asymmetric CPU capacity
From: Andrea Righi
Date: Fri Mar 27 2026 - 13:16:49 EST
On Fri, Mar 27, 2026 at 10:01:03PM +0530, Shrikanth Hegde wrote:
> Hi Andrea.
>
> On 3/26/26 8:32 PM, Andrea Righi wrote:
> > This series attempts to improve SD_ASYM_CPUCAPACITY scheduling by
> > introducing SMT awareness.
> >
> > = Problem =
> >
> > Nominal per-logical-CPU capacity can overstate usable compute when an SMT
> > sibling is busy, because the physical core doesn't deliver its full nominal
> > capacity. So, several SD_ASYM_CPUCAPACITY paths may pick high capacity CPUs
> > that are not actually good destinations.
> >
>
> How does the energy model define the OPP for SMT?
For now, as suggested by Vincent, we should probably ignore EAS / energy
model and keep it as it is (not compatible with SMT). I'll drop PATCH 3/4
and focus only on SD_ASYM_CPUCAPACITY + SMT.
>
> SMT systems have multiple different functional blocks: ALUs (arithmetic),
> LSUs (load/store units), etc. If the same/similar workload runs on a sibling,
> it would affect performance, but if the sibling is using different functional
> blocks, it would not.
>
> So the underlying actual CPU capacity of each thread depends on what each sibling is running.
> I don't understand how the firmware/energy model defines this.
They don't, and they probably shouldn't. I don't think it's possible to
model CPU capacity with a static nominal value when SMT is enabled, since
the effective capacity changes depending on whether the sibling is busy.
It should be up to the scheduler to figure out a reasonable way to estimate
the actual capacity, considering the status of the other sibling (e.g.,
prioritizing fully-idle SMT cores over partially-idle SMT cores, like we do
in other parts of the scheduler code).
>
> > = Proposed Solution =
> >
> > This patch set aligns those paths with a simple rule already used
> > elsewhere: when SMT is active, prefer fully idle cores and avoid treating
> > partially idle SMT siblings as full-capacity targets where that would
> > mislead load balance.
> >
> > Patch set summary:
> >
> > - [PATCH 1/4] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
> >
> > Prefer fully-idle SMT cores in asym-capacity idle selection. In the
> > wakeup fast path, extend select_idle_capacity() / asym_fits_cpu() so
> > idle selection can prefer CPUs on fully idle cores, with a safe fallback.
> >
> > - [PATCH 2/4] sched/fair: Reject misfit pulls onto busy SMT siblings on asym-capacity
> >
> > Reject misfit pulls onto busy SMT siblings on SD_ASYM_CPUCAPACITY.
> > Provided for consistency with PATCH 1/4.
> >
> > - [PATCH 3/4] sched/fair: Enable EAS with SMT on SD_ASYM_CPUCAPACITY systems
> >
> > Enable EAS with SD_ASYM_CPUCAPACITY and SMT. Also provided for
> > consistency with PATCH 1/4. I've also tested with/without
> > /proc/sys/kernel/sched_energy_aware enabled (same platform) and haven't
> > noticed any regression.
> >
> > - [PATCH 4/4] sched/fair: Prefer fully-idle SMT core for NOHZ idle load balancer
> >
> > When choosing the housekeeping CPU that runs the idle load balancer,
> > prefer an idle CPU on a fully idle core so migrated work lands where
> > effective capacity is available.
> >
> > The change is still consistent with the same "avoid CPUs with a busy
> > sibling" logic and it shows some benefits on Vera, but it could have a
> > negative impact on other systems; I'm including it for completeness
> > (feedback is appreciated).
> >
> > This patch set has been tested on the new NVIDIA Vera Rubin platform, where
> > SMT is enabled and the firmware exposes small frequency variations (+/-~5%)
> > as differences in CPU capacity, resulting in SD_ASYM_CPUCAPACITY being set.
> >
>
> I assume the CPU_CAPACITY values are fixed?
> The first sibling has max, while the other has less?
The firmware is exposing the same capacity for both siblings. SMT cores may
have different capacity, but siblings within the same SMT core have the
same capacity.
There was an idea to expose a higher capacity for all the 1st siblings and
a lower capacity for all the 2nd siblings, but I don't think it's a good
idea, since that would just confuse the scheduler (and a 2nd sibling
doesn't really have a lower nominal capacity when it's running alone).
>
> > Without these patches, performance can drop up to ~2x with CPU-intensive
> > workloads, because the SD_ASYM_CPUCAPACITY idle selection policy does not
> > account for busy SMT siblings.
> >
>
> How is the performance measured here? Which benchmark?
I've used an internal NVIDIA suite (based on NVBLAS), I also tried Linpack
and got similar results. I'm planning to repeat the tests using public
benchmarks and share the results as soon as I can.
> By any chance are you running number_running_task <= (nr_cpus / smt_threads_per_core),
> so it all fits nicely?
That's the case that gives me the optimal results.
>
> If you increase those numbers, how do the performance numbers compare?
I tried different numbers of tasks. The closer I get to system saturation,
the smaller the benefits are. When I completely saturate the system I don't
see any benefit from these changes, but no regressions either; I guess
that's expected.
>
> Also, what's the system like? SMT level?
2 siblings for each SMT core.
Thanks,
-Andrea