Re: [RFC PATCH] futex: Introduce __vdso_robust_futex_unlock

From: Thomas Gleixner

Date: Mon Mar 16 2026 - 18:20:09 EST


On Mon, Mar 16 2026 at 17:01, Mathieu Desnoyers wrote:
> On 2026-03-16 16:27, Thomas Gleixner wrote:
>> On Mon, Mar 16 2026 at 15:36, Mathieu Desnoyers wrote:
>>> On 2026-03-16 13:12, Thomas Gleixner wrote:
>>>> sys_exit() is different because there a task voluntarily exits and if
>>>> it does so between the unlock and the clearing of the op pointer,
>>>> then so be it. That'd be wilfull ignorance or malice and not any
>>>> different from the task doing the corruption itself in user space
>>>> right away.
>>>
>>> I'm not sure about this one. How about the two following scenario:
>>> A concurrent thread calls sys_exit concurrently with the vdso. Is this
>>> something we should handle or consider it "wilfull ignorance/malice" ?
>>
>> I don't understand your question. What has the exit to do with the VDSO?
>
> You mentioned that "if a task exits between unlock and clearing of the op
> pointer, then so be it".
>
> But that exit could be issued by another thread, not necessarily by the
> thread doing the unlock + pointer clear.
>
> But I understand that your series takes care of this by:
>
> - clearing the op pointer within the futex syscall,
> - tracking the insn range and ZF state within the vDSO.
>
> I'm fine with your approach, I was just not sure about your comment
> about it being "different" for sys_exit.

What I clearly described is the sequence:

set_pointer();
unlock();
sys_exit();

The kernel does not care about that at all because it is exactly what
user space asked for. That is clearly in the category of "I want to
shoot myself in the foot".

The only case where the kernel has to provide help to user space is the
involuntary exit caused by a crash or an external signal between
unlock() and clear_pointer(), simply because there is no way that user
space can solve that problem on its own.

If you want to prevent user space from shooting itself in the foot,
then the above crude scenario is the least of your problems.

Thanks,

tglx