Re: [PATCH] cpuidle: Deny idle entry when CPU already has an IPI pending
From: Daniel Lezcano
Date: Mon Mar 16 2026 - 05:33:00 EST
On 3/16/26 09:55, Christian Loehle wrote:
On 3/16/26 07:37, Maulik Shah wrote:
A CPU can receive an IPI from another CPU while it is executing
cpuidle_select(), or just before doing so. The selection does not account
for pending interrupts, so the CPU may enter the selected idle state only
to exit it immediately.
Example trace collected when there is a cross-CPU IPI:
[000] 154.892148: sched_waking: comm=sugov:4 pid=491 prio=-1 target_cpu=007
[000] 154.892148: ipi_raise: target_mask=00000000,00000080 (Function call interrupts)
[007] 154.892162: cpu_idle: state=2 cpu_id=7
[007] 154.892208: cpu_idle: state=4294967295 cpu_id=7
[007] 154.892211: irq_handler_entry: irq=2 name=IPI
[007] 154.892211: ipi_entry: (Function call interrupts)
[007] 154.892213: sched_wakeup: comm=sugov:4 pid=491 prio=-1 target_cpu=007
[007] 154.892214: ipi_exit: (Function call interrupts)
This impacts performance, and the count of such aborted idle entries keeps
growing.
Commit ccde6525183c ("smp: Introduce a helper function to check for pending
IPIs") already introduced a helper to check for pending IPIs; it is used in
the pmdomain governor to deny the cluster-level idle state when there is a
pending IPI on any of the cluster's CPUs.
That, however, does not stop a CPU from entering a CPU-level idle state. Use
the same helper in cpuidle to deny idle entry when an IPI is already pending.
With this change, glmark2 [1] off-screen scores improve in the range of 25%
to 30% on the Qualcomm lemans-evk board, an arm64 platform with two clusters
of 4 CPUs each.
[1] https://github.com/glmark2/glmark2
Signed-off-by: Maulik Shah <maulik.shah@xxxxxxxxxxxxxxxx>
---
drivers/cpuidle/cpuidle.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index c7876e9e024f9076663063ad21cfc69343fdbbe7..c88c0cbf910d6c2c09697e6a3ac78c081868c2ad 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -224,6 +224,9 @@ noinstr int cpuidle_enter_state(struct cpuidle_device *dev,
bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP);
ktime_t time_start, time_end;
+ if (cpus_peek_for_pending_ipi(drv->cpumask))
+ return -EBUSY;
+
instrumentation_begin();
/*
---
base-commit: b84a0ebe421ca56995ff78b66307667b62b3a900
change-id: 20260316-cpuidle_ipi-4c64036f9a48
Best regards,
So we already do a per-CPU IPI need_resched() check in the idle path.
need_resched() is not the same check. Here interrupts are off, and the test checks whether there is a pending IPI before entering the sleep routine, which would in any case abort because of it. This check saves the cost of preparing to enter the idle state, the call into the firmware, and the rollback; those add latency and energy overhead for nothing.

As stated in the description, the same last-moment check before going idle was also introduced for the cluster idle state and showed a significant improvement [1].
[1] https://lore.kernel.org/all/20251105095415.17269-1-ulf.hansson@xxxxxxxxxx/
Your patch uses drv->cpumask, which will contain all CPUs; doesn't that
prevent idle entry whenever any CPU in the driver's mask has an IPI pending?