Re: [PATCH v2 3/7] x86/sev: add support for RMPOPT instruction

From: Kalra, Ashish

Date: Wed Mar 25 2026 - 22:15:09 EST



On 3/25/2026 9:02 PM, Kalra, Ashish wrote:
>
> On 3/25/2026 7:40 PM, Andrew Cooper wrote:
>> On 25/03/2026 9:53 pm, Kalra, Ashish wrote:
>>> On 3/4/2026 9:56 AM, Andrew Cooper wrote:
>>>> It should be:
>>>>
>>>> static inline bool __rmpopt(unsigned long addr, unsigned int fn)
>>>> {
>>>>     bool res;
>>>>
>>>>     asm volatile (".byte 0xf2, 0x0f, 0x01, 0xfc"
>>>>                  : "=@ccc" (res)
>>>>                  : "a" (addr), "c" (fn));
>>>>
>>>>     return res;
>>>> }
>>>>
>>> The above constraints, combined with using on_each_cpu_mask(), are forcing the use of:
>>>
>>> void rmpopt(void *val)
>>
>> No.  You don't break your thin wrapper in order to force it into a
>> wrong-shaped hole.
>>
>> You need something like this:
>>
>> void do_rmpopt_optimise(void *val)
>> {
>>     unsigned long addr = *(unsigned long *)val;
>>
>>     WARN_ON_ONCE(__rmpopt(addr, OPTIMISE));
>> }
>>
>> to invoke the wrapper safely from the IPI.  That will at least make it
>> obvious when something goes wrong.
>
> This wrapper I can/will use, but the WARN_ON_ONCE() is probably avoidable, as
> there will be ranges where RMPOPT will always fail, such as when checking
> the RMP table entries themselves, so there is a good chance that we will
> always trigger the WARN_ON_ONCE() on the memory range containing the RMP table.
>

To add, the above is in the context of the current implementation, where we scan all
memory up to 2TB to apply RMP optimizations when SNP is enabled (and/or at SNP_INIT).

We will *always* get this stack trace during boot, so I think it makes sense
to avoid the WARN_ON_ONCE().

Thanks,
Ashish