[PATCH 0/9] s390: Improve this_cpu operations

From: Heiko Carstens

Date: Tue Mar 17 2026 - 15:59:52 EST


This is a follow-up to Peter Zijlstra's in-kernel rseq RFC [1].

With the intended removal of PREEMPT_NONE, this_cpu operations based on
atomic instructions, guarded with preempt_disable()/preempt_enable() pairs,
become more expensive: the preempt_disable()/preempt_enable() pairs are no
longer optimized away at compile time.

In particular the conditional call to preempt_schedule_notrace() after
preempt_enable() adds additional code and register pressure.

To avoid this Peter suggested an in-kernel rseq approach. While this would
certainly work, this series tries to come up with a solution which uses
fewer instructions and doesn't require repeating instruction sequences.

The idea is that this_cpu operations based on atomic instructions are
guarded with mviy instructions:

- The first mviy instruction writes the number of the register which
contains the percpu address to lowcore. This also indicates that a
percpu code section is being executed.

- The first instruction following the mviy instruction must be the ag
instruction, which adds the percpu offset to the percpu address register.

- Afterwards the atomic percpu operation follows.

- Then a second mviy instruction writes a zero to lowcore, which indicates
the end of the percpu code section.

- In case of an interrupt/exception/nmi the register number which was
written to lowcore is copied to the exception frame (pt_regs), and a zero
is written to lowcore.

- On return to the previous context it is checked whether a percpu code
section was being executed (saved register number not zero), and whether
the process was migrated to a different cpu. If the percpu offset was
already added to the percpu address register (instruction address does
_not_ point to the ag instruction), the content of the percpu address
register is adjusted so it points to the percpu variable of the new cpu.

All of this seems to work, but of course it could still be broken in case
I missed some detail.

In total this series results in a kernel text size reduction of ~106kb. The
number of preempt_schedule_notrace() call sites is reduced from 7089 to
1577.

Note: this comes without any in-depth performance analysis; however, all
microbenchmarks confirmed that the new code is at least as fast as the
old code, as expected.

[1] 20260223163843.GR1282955@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Heiko Carstens (9):
s390/percpu: Provide arch_raw_cpu_ptr()
s390/alternatives: Add new ALT_TYPE_PERCPU type
s390/percpu: Infrastructure for more efficient this_cpu operations
s390/percpu: Use new percpu code section for arch_this_cpu_add()
s390/percpu: Use new percpu code section for arch_this_cpu_add_return()
s390/percpu: Use new percpu code section for arch_this_cpu_[and|or]()
s390/percpu: Provide arch_this_cpu_read() implementation
s390/percpu: Provide arch_this_cpu_write() implementation
s390/percpu: Remove one and two byte this_cpu operation implementation

arch/s390/boot/alternative.c | 7 +
arch/s390/include/asm/alternative.h | 5 +
arch/s390/include/asm/entry-percpu.h | 54 ++++++
arch/s390/include/asm/lowcore.h | 3 +-
arch/s390/include/asm/percpu.h | 259 ++++++++++++++++++++++-----
arch/s390/include/asm/ptrace.h | 2 +
arch/s390/kernel/alternative.c | 25 ++-
arch/s390/kernel/irq.c | 5 +
arch/s390/kernel/nmi.c | 3 +
arch/s390/kernel/traps.c | 3 +
10 files changed, 319 insertions(+), 47 deletions(-)
create mode 100644 arch/s390/include/asm/entry-percpu.h

base-commit: f338e77383789c0cae23ca3d48adcc5e9e137e3c
--
2.51.0