Re: [LSF/MM/BPF TOPIC][RESEND] Status of Rust in the block subsystem
From: Andreas Hindborg
Date: Tue Mar 17 2026 - 04:32:50 EST
"Greg KH" <gregkh@xxxxxxxxxxxxxxxxxxx> writes:
> On Tue, Mar 17, 2026 at 09:07:10AM +0100, Andreas Hindborg wrote:
>> "Hannes Reinecke" <hare@xxxxxxx> writes:
>>
>> > On 3/17/26 00:51, Keith Busch wrote:
>> >> On Wed, Mar 11, 2026 at 02:21:00PM +0100, Andreas Hindborg wrote:
>> >>> As this topic was not selected for discussion at LSF, and I did not
>> >>> receive an invitation for LSF this year, I propose that we discuss these
>> >>> two topics on list.
>> >>>
>> >>> I do believe that these topics need to be discussed, and I would very
>> >>> much appreciate your input.
>> >>
>> >> I can sympathise with the difficulty of maintaining external modules.
>> >>
>> >> In terms of this being a reference driver, that implies some future
>> >> hardware driver may leverage this for its development. Is there anything
>> >> in mind at this point for production? If so, maybe that use case should
>> >> take the lead. But either way, I think rust-nvme upstream inclusion
>> >> would invite confusion. Once it's upstream, it's no longer a reference
>> >> when distros and users turn it on.
>> >>
>> > I wholeheartedly agree.
>> >
>> > While I do see the original appeal to have a rust-nvme driver, having
>> > one will just lead to confusion on all sides, especially for distros.
>> > (Why is it there? Should it be preferred to the original one? Do we
>> > have to support both of them? Are there features missing in either
>> > of these drivers?)
>> > In general we are trying hard to avoid duplication in the Linux kernel,
>> > especially on the driver side. We constantly have to fight^Wargue
>> > with driver vendors why duplicating existing drivers to support new
>> > hardware is a bad idea, so we really should not start now just because
>> > the driver is written in another language.
>> > (That really might be giving vendors bad ideas :-)
>>
>> I actually agree to some extent. But I do think we can get around most
>> confusion with loud and clear documentation. We could make the driver
>> not probe by default, requiring a configfs setting to enable probing.
>> Or we could leave the PCI identifier table empty, so that patching the
>> driver would be required before it could probe anything.
>>
>> For me, the big benefit would be having the Rust NVMe driver as part of
>> an allmodconfig or allyesconfig build. That would prevent a ton of trouble.
>>
>> We do plan to utilize the block infrastructure, but I think we are still
>> quite a long way from sending anything. Keeping the Rust NVMe driver
>> in-tree until that point would prevent the pci, dma, irq, etc. bindings
>> from developing in ways that would not support a block device use case.
>> As an example, the upstream Rust irq APIs are not actually able to
>> support NVMe at the moment. They work fine for GPU drivers, though, and
>> I cannot go and fix them without a user. The same goes for DMA pool.
>>
>> I could go and find some other piece of unsupported PCI hardware and
>> write a driver for that, and use it to keep the APIs in shape upstream.
>> It's just a lot more work, and the NVMe driver is already here and 90%
>> ready.
>
> This implies that there really is no "need" for these Rust bindings at
> all, if you don't know of, or aren't planning, any real driver to use
> them. So why have them at all?
>
> For the PCI and driver core bindings, and the majority of the other ones
> merged, we have real users (binder, nova-core, etc.) and so we are
> willing to take them and keep them up to date. For these block
> bindings, why is it even worth it to have them around if there's never
> going to be a real user?
I'm just going to quote myself in case you missed these few sentences:
We do plan to utilize the block infrastructure, but I think we are still
quite a long way from sending anything.
<cut>
My fear is that by then, I will have to patch a number of GPU drivers in
the process.
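
To make the empty-ID-table idea above concrete, here is a rough sketch.
The macro and trait names follow my reading of the upstream kernel::pci
Rust abstractions, and NvmeDriver is a placeholder name, so treat this as
pseudocode rather than actual rust-nvme code:

// Sketch only: with an empty PCI device ID table the driver still builds
// as part of allmodconfig/allyesconfig, but the PCI core has nothing to
// match against, so it can never probe real hardware unless someone
// patches an ID in or binds the device manually.
kernel::pci_device_table!(
    PCI_TABLE,
    MODULE_PCI_TABLE,
    <NvmeDriver as pci::Driver>::IdInfo, // NvmeDriver: hypothetical driver type
    [] // intentionally empty: no vendor/device ID pairs, so no automatic probe
);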
Best regards,
Andreas Hindborg