Re: [PATCH 00/17] Paravirt CPUs and push task for less vCPU preemption

From: Shrikanth Hegde

Date: Thu Mar 26 2026 - 02:12:53 EST




On 11/19/25 6:14 PM, Shrikanth Hegde wrote:
Detailed problem statement and some of the implementation choices were
discussed earlier[1].

[1]: https://lore.kernel.org/all/20250910174210.1969750-1-sshegde@xxxxxxxxxxxxx/

This is likely the version that will be used for the LPC2025 discussion on
this topic. Feel free to provide your suggestions; hoping for a solution
that works for different architectures and their use cases.

All the existing alternatives, such as CPU hotplug or creating isolated
partitions, break the user's affinity. Since the number of CPUs to use
changes depending on the steal time, it is not driven by the user, hence
it would be wrong to break the affinity. With this series, if a task is
pinned only to paravirt CPUs, it will continue running there.
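The affinity rule above can be sketched as a toy userspace model (plain
64-bit words standing in for cpumask_t; should_push_task is a made-up
name for illustration, not a function from the series):

```c
#include <stdbool.h>

/*
 * Toy model of the affinity rule (userspace sketch, not kernel code):
 * a task running on a paravirt CPU is pushed away only if its affinity
 * mask still contains at least one non-paravirt CPU.  A task pinned
 * only to paravirt CPUs keeps running where it is.
 */
static bool should_push_task(unsigned long long allowed,
                             unsigned long long paravirt)
{
	/* CPUs the task may use that are NOT marked paravirt */
	unsigned long long usable = allowed & ~paravirt;

	return usable != 0;	/* push only if there is somewhere to go */
}
```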

Changes compared to v3[1]:

- Introduced computation of steal time in powerpc code.
- Derive number of CPUs to use and mark the remaining as paravirt based
on steal values.
- Provide debugfs knobs to alter how steal time values are used.
- Removed static key check for paravirt CPUs (Yury)
- Removed preempt_disable/enable while calling stopper (Prateek)
- Made select_idle_sibling and friends aware of paravirt CPUs.
- Removed 3 unused schedstat fields and introduced 2 related to paravirt
handling.
- Handled the nohz_full case by enabling the tick on such a CPU when
there is a CFS/RT task on it.
- Updated helper patch to override arch behaviour for easier debugging
during development.
- Kept

Changes compared to v4[2]:
- Last two patches were sent out separately instead of with the series,
which created confusion. Those two patches are debug patches one can
use to check functionality across architectures. Sorry about that.
- Use DEVICE_ATTR_RW instead (greg)
- Made it as PATCH since arch specific handling completes the
functionality.

[2]: https://lore.kernel.org/all/20251119062100.1112520-1-sshegde@xxxxxxxxxxxxx/

TODO:

- Get performance numbers on PowerPC, x86 and s390, hopefully by next
week. Didn't want to hold the series till then.

- The logic for choosing which CPUs to mark as paravirt is very simple and
doesn't work when vCPUs aren't spread out uniformly across NUMA nodes.
Ideally we would splice the numbers based on how many CPUs each NUMA node
has. That is quite tricky to do, especially since the cpumask can be on
the stack too, given NR_CPUS can be 8192 and nr_possible_nodes 32.
Haven't got my head around solving it yet. Maybe there is an easier way.
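One possible shape for the splicing, as a userspace sketch with plain
int arrays instead of cpumasks (splice_paravirt is a hypothetical name;
the remainder distribution is one arbitrary choice among several):

```c
/*
 * Sketch: distribute nr_paravirt CPUs across NUMA nodes in proportion
 * to each node's size.  node_cpus[i] is the CPU count of node i; the
 * per-node paravirt count is written into out[i].  Assumes
 * nr_paravirt <= total CPUs.  Not the actual series logic.
 */
static void splice_paravirt(const int *node_cpus, int nr_nodes,
			    int nr_paravirt, int *out)
{
	int total = 0, assigned = 0, i;

	for (i = 0; i < nr_nodes; i++)
		total += node_cpus[i];

	/* floor of each node's proportional share */
	for (i = 0; i < nr_nodes; i++) {
		out[i] = nr_paravirt * node_cpus[i] / total;
		assigned += out[i];
	}

	/* hand out the rounding remainder one CPU at a time */
	for (i = 0; assigned < nr_paravirt; i = (i + 1) % nr_nodes) {
		if (out[i] < node_cpus[i]) {
			out[i]++;
			assigned++;
		}
	}
}
```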

- DLPAR Add/Remove needs to call init of EC/VP cores (powerpc specific)

- Userspace tools awareness such as irqbalance.

- Delve into the design of a hint from the hypervisor (HW hint), i.e. the
host informs the guest which/how many CPUs it should use at this moment.
This interface should work across archs, with each arch doing its specific
handling.

- Determine the default values for steal time related knobs
empirically and document them.

- Need to check safety against CPU hotplug, especially in process_steal.


Applies cleanly on tip/master:
commit c2ef745151b21d4dcc4b29a1eabf1096f5ba544b


Thanks to Srikar for providing the initial code around powerpc steal
time handling. Thanks to all who went through the series and provided
reviews.

PS: I haven't found a better name. Please suggest if you have any.


Sorry for the long delay in coming up with next steps. Largely it was due
to me not having worked on it, partially due to the lack of a system being
available.

I have been wondering how to proceed for next version. Your comments are highly
appreciated.

- One of the ideas Vincent suggested was to use CPU capacity.
I made a PoC[1] around it and it works, but it doesn't seem efficient
to me. The reasons being:
- In sched_balance_rq it would be better not to spread load onto a
CPU marked as paravirt, as sched_tick would be trying to do the same
thing, especially for active_balance.
- We would need a notion of which CPUs are marked as not to be used.
Computing that in sched_balance_rq is going to be costly.
- So we are going to need a cpumask which maintains that state. If we
have that cpumask already, CPU capacity need not be changed. There will
be a separation between the two, so they won't fight with each other
IMO. Feel free to correct me.
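The "maintained cpumask instead of recomputed capacities" point boils
down to one AND-NOT per balance pass. A toy model with 64-bit words
standing in for cpumasks (balance_candidates is a made-up name; the
fallback behaviour is my assumption, not from the series):

```c
/*
 * Toy model: filter load-balance target CPUs with a maintained
 * paravirt mask.  If every CPU in the group is paravirt, fall back to
 * the full group so tasks pinned only to paravirt CPUs still get
 * balanced among them.  Not kernel code.
 */
static unsigned long long balance_candidates(unsigned long long group,
					     unsigned long long paravirt)
{
	unsigned long long c = group & ~paravirt;

	return c ? c : group;
}
```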

- I have been thinking: steal time is a generic property across archs, so
why have arch specific handling when it could be in generic code? I know
CPU numbers could be tricky, but how about having steal time handling
governors? The default governor would take out the last set of cores.
(I still need to figure out splicing across NUMA nodes.) i.e. in
sched_tick, we periodically call schedule_work to handle the steal time;
if steal time is greater than a configurable threshold, a step up/down
approach can be taken (same as the current powerpc logic).
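The step up/down idea from the schedule_work handler could look roughly
like this userspace sketch (steal_governor_step and the half-threshold
hysteresis band are my assumptions for illustration, not the powerpc
logic verbatim):

```c
/*
 * Sketch of a step up/down steal time governor: compare the sampled
 * steal percentage against a configurable threshold and step the
 * paravirt CPU count by one core's worth at a time, clamped to
 * [0, max].  The dead band between threshold/2 and threshold avoids
 * flapping.  Toy model, not kernel code.
 */
static int steal_governor_step(int nr_paravirt, int steal_pct,
			       int threshold_pct, int step, int max)
{
	if (steal_pct > threshold_pct)
		nr_paravirt += step;	/* heavy steal: give up more CPUs */
	else if (steal_pct < threshold_pct / 2)
		nr_paravirt -= step;	/* steal subsided: reclaim CPUs */

	if (nr_paravirt < 0)
		nr_paravirt = 0;
	if (nr_paravirt > max)
		nr_paravirt = max;

	return nr_paravirt;
}
```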

- Regarding the cpuset way, it would still need sched domain rebuilds,
which I think is still expensive on large systems. Though steal time
changes are not that frequent, it would be better if the infra is
lightweight. Also, there are different cgroup versions, and I don't know
how to fit into all those cases.

- I went through the cover letter of "Semantics-aware vCPU scheduling for
oversubscribed KVM"[2]. My take is this would help reduce lock holder
preemption, as it aims to reduce steal time by stacking tasks on a smaller
set of CPUs. Once the lock holder runs, it would disable preemption and
run to completion.

- Debug some of the cases discussed at LPC. The schbench regression was
gone after modifying it. Hackbench had regressions in some cases. Setting
up the systems to do so; let me see if I can re-create that on powerpc.

- I still need to figure out the IRQ related stuff: how to force or
migrate IRQs away from CPUs marked as paravirt. irqbalance is one thing,
but how to do so when irqbalance is not running?

- How about the name "usable" CPUs?


[1]: https://lore.kernel.org/all/b8d6d83c-00d8-4b66-8470-62cc528e1d6b@xxxxxxxxxxxxx/
[2]: https://lore.kernel.org/all/20251219035334.39790-1-kernellwp@xxxxxxxxx/