Re: [PATCH v9 2/4] ring-buffer: Flush and stop persistent ring buffer on panic
From: Mathieu Desnoyers
Date: Wed Mar 18 2026 - 11:01:59 EST
On 2026-03-18 10:19, Masami Hiramatsu (Google) wrote:
On Wed, 11 Mar 2026 10:32:29 +0900
"Masami Hiramatsu (Google)" <mhiramat@xxxxxxxxxx> wrote:
From: Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>
On real hardware, panic and machine reboot may not flush hardware cache
to memory. This means the persistent ring buffer, which relies on a
coherent state of memory, may not have its events written to the buffer
and they may be lost. Moreover, there may be inconsistency with the
counters which are used for validation of the integrity of the
persistent ring buffer which may cause all data to be discarded.
To avoid this issue, stop recording of the ring buffer on panic and
flush the cache of the ring buffer's memory.
Hmm, on some architectures, flush_cache_vmap() is implemented using
on_each_cpu(), which waits for IPIs. But that is not safe in a panic
notifier, because the notifier is called after smp_send_stop().
Since this cache flush issue is currently only confirmed on arm64,
I would like to make it do nothing (do { } while (0)) by default.
FWIW, I've sent a related series a while ago about flushing pmem
areas to memory on panic:
https://lore.kernel.org/lkml/20240618154157.334602-3-mathieu.desnoyers@xxxxxxxxxxxx/
When reading your patch, I feel like I'm missing something, so please bear with
me for a few questions:
- What exactly are you trying to flush? By "flush" do you mean
evict cache lines or write back cache lines? (I expect you aim
at the second option.)
- AFAIU, you are not trying to evict cache lines after creation
of a new virtual mapping (which is the documented intent of
flush_cache_vmap).
- AFAIU flush_cache_vmap maps to a no-op on arm64 (via asm-generic), so what am
I missing? It makes sense for it to be a no-op because AFAIR arm64 does not
have to deal with virtually aliasing caches.
see commit 8690bbcf3b7 ("Introduce cpu_dcache_is_aliasing() across all architectures")
arch_wb_cache_pmem() is specific to pmem, which is not exactly what you want
to use, but on arm64 it is implemented as:
/* Ensure order against any prior non-cacheable writes */
dmb(osh);
dcache_clean_pop((unsigned long)addr, (unsigned long)addr + size);
which I think has the writeback semantic you are looking for, and AFAIU should not
require IPIs (at least on arm64) to flush cache lines across the entire system.
Cheers,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com