Re: [PATCH] PCI: endpoint: pci-epf-test: Roll back BAR mapping when subrange setup fails
From: Niklas Cassel
Date: Wed Mar 18 2026 - 04:59:20 EST
On Wed, Mar 18, 2026 at 09:35:10AM +0100, Christian Bruel wrote:
>
>
> > > >
> > > > I think "modify TEST_F to SKIP if ENOSPC",
> > > > since it will solve the problem for all platforms that have few
> > > > inbound iATUs.
> > >
> > > That sounds like the right direction, though I think we would first
> > > need a way
> > > to propagate -ENOSPC back to the host side, instead of just
> > > collapsing all
> > > EP-side setup failures into STATUS_BAR_SUBRANGE_SETUP_FAIL, which
> > > pci_endpoint_test currently maps to -EIO.
> > >
> >
> > I think the epf test function can return several bits in the status
> > mask, in addition to STATUS_BAR_SUBRANGE_SETUP_FAIL, e.g.
> > STATUS_BAR_SUBRANGE_SETUP_SKIP or STATUS_BAR_SUBRANGE_SETUP_NOSPC.
> > I prefer the latter, since we want to report the cause, not the
> > action, and leave the skip decision to the host.
> >
>
> Rethinking this, having a pci_epc_feature to limit the maximum number of
> simultaneously allocatable BARs might be a useful addition to the EPC
> driver. The EPC driver would need to keep track of the allocated BARs and
> check that count before calling set_bar(), to decide whether to skip.
The limitation is not allocatable BARs; it is the number of inbound/outbound
iATUs/windows.
(E.g. with inbound subrange mapping, one BAR could be split into three and
require three different iATUs, while another BAR is not split and thus
requires only one iATU.)
Right now, a big limitation in the PCI endpoint framework is that there is
no API to see how many inbound/outbound iATUs are currently in use.
So the only thing you can do is call mem_map() and see if you get an
error. This is a bit wasteful: in some cases you could probably skip or
defer a lot of processing if you knew that there were no free iATU
windows available.
However, I think such an API would be most useful for outbound mappings,
i.e. endpoint to host transfers. Think of e.g. nvmet-pci-epf: you can
easily have a queue depth of 128, and thus 128 outbound mappings at the
same time.
For inbound mappings, you can only ever map BARs, so in comparison
to outbound mappings, inbound mappings are very limited.
6 BARs, and sure, with inbound subrange mapping you can have a few
windows per BAR, but this is usually something the EPF driver does at
.bind() time, even before any transfers have taken place.
If set_bar() fails with -ENOSPC, or something else that indicates no free
window, I would imagine that is good enough for most EPF drivers.
>
> Do you think this added (minor) complexity is worth it compared to simply
> returning ENOSPC in the status?
TL;DR: I think a "number of free windows" API would be a good addition,
but for outbound windows.
For inbound windows, it seems a bit unnecessary IMO.
Kind regards,
Niklas