Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
From: Kiryl Shutsemau
Date: Mon Mar 16 2026 - 08:19:01 EST
On Thu, Mar 12, 2026 at 11:00:09PM +1300, Kai Huang wrote:
> TDX can leave the cache in an incoherent state for the memory it uses.
> During kexec the kernel does a WBINVD for each CPU before memory gets
> reused in the second kernel.
>
> There were two considerations for where this WBINVD should happen. To
> handle cases where the cache might become incoherent during the early
> stages of kexec, it needs to be done late in the kexec path, at the
> point where the kexecing CPU stops all remote CPUs. However, that late
> stage of kexec is sensitive to races, so to avoid perturbing it, it is
> better to do the WBINVD earlier.
>
> The existing solution is to track the need for the kexec-time WBINVD
> generically (i.e., not just for TDX) in a per-CPU variable. The late
> invocation only happens if the earlier TDX-specific logic in
> tdx_cpu_flush_cache_for_kexec() didn't already do the work. This
> earlier WBINVD logic was built into KVM's existing syscore ops
> shutdown() handler, which is called earlier in the kexec path.
>
> However, this accidentally added it to KVM's unload path as well (and
> to the error path when bringing up TDX during KVM module load), which
> uses the same internal functions. This makes some sense too, because
> if KVM is being unloaded, TDX cache-affecting operations will likely
> cease. So it is a good point to do the work before KVM is unloaded
> and no longer has a chance to handle the shutdown operation.
>
> Unfortunately this KVM unload invocation triggers a lockdep warning in
> tdx_cpu_flush_cache_for_kexec():
>
> IS_ENABLED(CONFIG_PREEMPT_COUNT) && __lockdep_enabled && (preempt_count() == 0 && this_cpu_read(hardirqs_enabled))
> WARNING: arch/x86/virt/vmx/tdx/tdx.c:1875 at tdx_cpu_flush_cache_for_kexec+0x36/0x60, CPU#0: cpuhp/0/22
> ...
> Call Trace:
> <TASK>
> vt_disable_virtualization_cpu+0x1c/0x30 [kvm_intel]
> kvm_arch_disable_virtualization_cpu+0x12/0x80 [kvm]
> kvm_offline_cpu+0x24/0x40 [kvm]
> cpuhp_invoke_callback+0x1b0/0x740
> ...
>
> Since tdx_cpu_flush_cache_for_kexec() does WBINVD on a specific CPU,
> it has an assert that preemption is disabled. This works fine for the
> kexec-time invocation, but the KVM unload path calls it from a CPUHP
> callback, for which preemption is not disabled despite the callback
> always executing on the target CPU.
>
> It might be better to add the earlier invocation logic to a dedicated
> arch/x86 TDX syscore shutdown() handler, but to make the fix more
> backport friendly, just adjust the lockdep assert in
> tdx_cpu_flush_cache_for_kexec().
>
> The real requirement is that tdx_cpu_flush_cache_for_kexec() runs
> entirely on one CPU. It is OK for it to be preempted in the middle,
> as long as it cannot be rescheduled to another CPU.
>
> Remove the too-strong lockdep_assert_preemption_disabled(), and change
> this_cpu_{read,write}() to __this_cpu_{read,write}(), which provide a
> more appropriate check: when CONFIG_DEBUG_PREEMPT is enabled, they
> verify that the context cannot migrate to another CPU in the middle of
> the operation (preemption disabled, IRQs disabled, or the task pinned
> to a single CPU).
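>
> In code terms, the fix amounts to roughly the following sketch (this
> is illustrative only, not the literal diff; the per-CPU variable name
> cache_state_incoherent and the simplified function body are stand-ins
> for the real code):
>
>	 static void tdx_cpu_flush_cache_for_kexec(void)
>	 {
>	-	lockdep_assert_preemption_disabled();
>	-
>	-	if (!this_cpu_read(cache_state_incoherent))
>	+	/*
>	+	 * With CONFIG_DEBUG_PREEMPT, __this_cpu_read() and
>	+	 * __this_cpu_write() sanity-check that this context
>	+	 * cannot migrate to another CPU, which is the actual
>	+	 * requirement here; being preemptible is fine.
>	+	 */
>	+	if (!__this_cpu_read(cache_state_incoherent))
>	 		return;
>
>	 	wbinvd();
>	-	this_cpu_write(cache_state_incoherent, false);
>	+	__this_cpu_write(cache_state_incoherent, false);
>	 }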
>
> Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> Cc: stable@xxxxxxxxxxxxxxx
> Reported-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>
> Tested-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>
> Acked-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Reviewed-by: Nikolay Borisov <nik.borisov@xxxxxxxx>
> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
> Signed-off-by: Kai Huang <kai.huang@xxxxxxxxx>
Acked-by: Kiryl Shutsemau (Meta) <kas@xxxxxxxxxx>
--
Kiryl Shutsemau / Kirill A. Shutemov