Re: [RFC PATCHv2 1/1] idpf: Fix header clobber in IDPF with SWIOTLB enabled

From: Steve Rutherford

Date: Tue Mar 24 2026 - 20:51:23 EST


On Mon, Mar 23, 2026 at 6:33 AM Alexander Lobakin
<aleksander.lobakin@xxxxxxxxx> wrote:
>
> From: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> Date: Thu, 12 Mar 2026 17:30:24 +0100
>
> > Hey,
> >
> > From: Steve Rutherford via Intel-wired-lan <intel-wired-lan@xxxxxxxxxx>
> > Date: Fri, 6 Mar 2026 11:35:27 -0800
> >
> >> On Fri, Mar 6, 2026 at 6:52 AM Alexander Lobakin
> >> <aleksander.lobakin@xxxxxxxxx> wrote:
> >>>
> >>> From: Steve Rutherford <srutherford@xxxxxxxxxx>
> >>> Date: Wed, 4 Mar 2026 14:01:46 -0800
> >>>
> >>>> I believe syncing twice isn't inherently wrong - it's more that you
> >>>> can't synthesize the header via the workaround and then sync, since it
> >>>> will pull the uninitialized header buffer from the SWIOTLB. Outside of
> >>>> SWIOTLB, dma syncs are more or less no-ops, while (with SWIOTLB) they
> >>>> are copies from/to the bounce buffers.
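
[Aside, not the idpf code: a rough sketch of that ordering hazard using the
generic DMA API. The names here (dev, hdr_dma, hdr_va, pkt_va, hdr_len) are
made up for illustration.]

	/* With SWIOTLB, dma_sync_*() calls are real copies to/from the
	 * bounce buffer, so ordering matters.
	 *
	 * Broken: syncing for CPU after synthesizing the header pulls the
	 * (uninitialized) bounce buffer contents over what we just wrote.
	 */
	memcpy(hdr_va, pkt_va, hdr_len);
	dma_sync_single_for_cpu(dev, hdr_dma, hdr_len, DMA_FROM_DEVICE);

	/* Working: sync for CPU first (if needed at all), write the header,
	 * then sync towards the device so the bounce buffer matches what the
	 * CPU wrote. Note only DMA_TO_DEVICE / DMA_BIDIRECTIONAL make
	 * swiotlb copy CPU -> bounce on the for_device sync.
	 */
	dma_sync_single_for_cpu(dev, hdr_dma, hdr_len, DMA_FROM_DEVICE);
	memcpy(hdr_va, pkt_va, hdr_len);
	dma_sync_single_for_device(dev, hdr_dma, hdr_len, DMA_BIDIRECTIONAL);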
> >>>
> >>> Ah I see.
> >>>
> >>> What if I add a sync_for_device after copying the header? That should
> >>> synchronize the bounce buffer with the copied data, I guess. It adds a
> >>> bit of overhead, but this W/A triggers mostly on stuff like ARP/ICMP;
> >>> "hotpath" L4 protos are fortunately not affected.
> >>
> >> That should work fine as well. I'm not certain I have strong
> >> preferences on the right answer here, other than "does it work and,
> >> ideally, is it less confusing?" The patch I posted is a bit
> >> unintuitive. I think what you are describing might make the workaround
> >> self-contained.
> >
> > Could you please test this patch with SWIOTLB? If it doesn't fix
> > the issue, you can try changing `page_pool_get_dma_dir(hdr_pp)`
> > to `DMA_TO_DEVICE` and/or `DMA_BIDIRECTIONAL`.
> > Currently, I don't have any machines with SWIOTLB, unfortunately =\
> > Let me know if any of these works. I'll submit it properly when we
> > have a solution.
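
[For whoever tests this: the direction tweak above only changes the last
argument of the new dma_sync call in the diff below. A sketch of one variant
to try, not the final code:]

	/* Test variant: force the CPU-written header out to the SWIOTLB
	 * bounce buffer regardless of the pool's mapping direction.
	 */
	dma_sync_single_range_for_device(hdr_pp->p.dev, hdr_addr,
					 hdr->offset + hdr_pp->p.offset,
					 copy, DMA_TO_DEVICE);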
>
> Any updates? I need your Tested-by in order to send this.

Sorry for the delay. I tried to reproduce this against a 6.18 kernel and
ran into environment-specific issues with that kernel. I'll take another
stab sometime this week.

thanks,
Steve
>
> >
> > (the patch applies cleanly to the latest net-next and should apply
> > to a couple older kernel releases as well)
> >
> >>
> >> thanks,
> >> Steve
> >> [And sorry for my gmail-driven top posting crimes D: ]
> >
> > Thanks,
> > Olek
> > ---
> > diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > index 45ee5b80479a..42111d56d66f 100644
> > --- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > +++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
> > @@ -3475,7 +3475,8 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
> >  			     struct libeth_fqe *buf, u32 data_len)
> >  {
> >  	u32 copy = data_len <= L1_CACHE_BYTES ? data_len : ETH_HLEN;
> > -	struct page *hdr_page, *buf_page;
> > +	const struct page_pool *hdr_pp;
> > +	dma_addr_t hdr_addr;
> >  	const void *src;
> >  	void *dst;
> >
> > @@ -3483,16 +3484,20 @@ static u32 idpf_rx_hsplit_wa(const struct libeth_fqe *hdr,
> >  	    !libeth_rx_sync_for_cpu(buf, copy))
> >  		return 0;
> >
> > -	hdr_page = __netmem_to_page(hdr->netmem);
> > -	buf_page = __netmem_to_page(buf->netmem);
> > -	dst = page_address(hdr_page) + hdr->offset +
> > -	      pp_page_to_nmdesc(hdr_page)->pp->p.offset;
> > -	src = page_address(buf_page) + buf->offset +
> > -	      pp_page_to_nmdesc(buf_page)->pp->p.offset;
> > +	hdr_pp = __netmem_get_pp(hdr->netmem);
> > +	dst = __netmem_address(hdr->netmem) + hdr->offset + hdr_pp->p.offset;
> > +	src = __netmem_address(buf->netmem) + buf->offset +
> > +	      __netmem_get_pp(buf->netmem)->p.offset;
> >
> >  	memcpy(dst, src, LARGEST_ALIGN(copy));
> >  	buf->offset += copy;
> >
> > +	/* Make sure SWIOTLB is synced */
> > +	hdr_addr = page_pool_get_dma_addr_netmem(hdr->netmem);
> > +	dma_sync_single_range_for_device(hdr_pp->p.dev, hdr_addr,
> > +					 hdr->offset + hdr_pp->p.offset,
> > +					 copy, page_pool_get_dma_dir(hdr_pp));
> > +
> >  	return copy;
> >  }
>
> Thanks,
> Olek