Re: Re: [PATCH] fuse: when copying a folio delay the mark dirty until the end
From: Joanne Koong
Date: Fri Mar 20 2026 - 13:24:51 EST
On Thu, Mar 19, 2026 at 1:32 AM Horst Birthelmer <horst@xxxxxxxxxxxxx> wrote:
>
> On Wed, Mar 18, 2026 at 06:32:25PM -0700, Joanne Koong wrote:
> > On Wed, Mar 18, 2026 at 2:52 PM Bernd Schubert <bernd@xxxxxxxxxxx> wrote:
> > >
> > > Hi Joanne,
> > >
> > > On 3/18/26 22:19, Joanne Koong wrote:
> > > > On Wed, Mar 18, 2026 at 7:03 AM Horst Birthelmer <horst@xxxxxxxxxxxxx> wrote:
> > > >>
> > > >> Hi Joanne,
> > > >>
> > > >> I wonder, would something like this help for large folios?
> > > >
> > > > Hi Horst,
> > > >
> > > > I don't think it's likely that the pages backing the userspace buffer
> > > > are large folios, so I think this may actually add extra overhead with
> > > > the extra folio_test_dirty() check.
> > > >
> > > > From what I've seen, the main costs that dwarf everything else for
> > > > writes/reads are the actual IO, the context switches, and the memcpys.
> > > > Compared to those, I think the set_page_dirty_lock() cost is
> > > > negligible and pretty much undetectable.
> > >
> > >
> > > A little bit of background here. We see in CPU flame graphs that the
> > > spin lock taken in lock_request() and unlock_request() takes about the
> > > same amount of CPU time as the memcpy. Interestingly, only on Intel,
> > > but not AMD CPUs. Note that we are running with our custom page
> > > pinning, which just takes the pages from an array, so
> > > iov_iter_get_pages2() is not used.
> > >
> > > The reason for that unlock/lock is documented at the end of
> > > Documentation/filesystems/fuse/fuse.rst as the kamikaze file system
> > > case. Well, we don't have that, so for now these checks are modified
> > > in our branches to avoid the lock, although that is not upstreamable.
> > > The right solution here is to extract an array of pages and do that
> > > unlock/lock once per pagevec.
> > >
> > > Next in the flame graph is set_page_dirty_lock(), which also takes as
> > > much CPU time as the memcpy. Again, Intel CPUs only.
> > > In combination with the above pagevec method, I think the right
> > > solution is to iterate over the pages, track the last folio, and then
> > > mark it dirty once per folio.
> >
> > Thanks for the background context. The Intel vs AMD difference is
> > interesting. The approaches you mention sound reasonable. Are you able
> > to share the flame graph, or is this easily reproducible using fio on
> > the passthrough_hp server?
> >
> >
> Hi Joanne,
>
> I have tried to reproduce this with passthrough_hp and I never saw it.
> So my answer would be something like: I don't think so.
>
> This happens even with large folios disabled. I was just trying to
> solve it, since I figured it would be worse with large folios.
Thanks for the context. I haven't encountered this bottleneck myself
(yet), but if you are hitting it pretty regularly, I agree with you
that it definitely seems worth addressing.
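
Just to make sure I'm picturing the same thing for the lock batching
Bernd described: something very roughly like the below, where the
unlock_request()/lock_request() pair is done once per batch of pinned
pages instead of once per page? (Completely untested pseudo-code
against the fuse internals; FUSE_COPY_BATCH and copy_one_page() are
made-up names, and unpinning/error handling is omitted.)

static int fuse_copy_user_pages_batched(struct fuse_copy_state *cs,
					unsigned long addr)
{
	struct page *pages[FUSE_COPY_BATCH];
	int i, n, err;

	/* Let an abort proceed while we may sleep pinning pages. */
	err = unlock_request(cs->req);
	if (err)
		return err;

	n = pin_user_pages_fast(addr, FUSE_COPY_BATCH, FOLL_WRITE, pages);
	if (n <= 0)
		return n ? n : -EFAULT;

	/* Retake the request lock once for the whole batch. */
	err = lock_request(cs->req);
	if (err)
		return err;

	for (i = 0; i < n; i++) {
		/* hypothetical per-page copy helper */
		err = copy_one_page(cs, pages[i]);
		if (err)
			break;
	}
	return err;
}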
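
And for the dirty marking, the per-folio version I have in mind is
something like this: remember the folio of the page we last copied
into and only mark dirty when we cross into a new folio, plus once at
the very end. (Again just a sketch, not the real fuse_copy_* path;
copy_to_pinned_pages() is an illustrative stand-alone helper and
offsets/partial pages are ignored.)

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/pagemap.h>

static void copy_to_pinned_pages(struct page **pages, unsigned int npages,
				 const char *src, size_t len)
{
	struct folio *last = NULL;
	unsigned int i;

	for (i = 0; i < npages && len; i++) {
		size_t n = min_t(size_t, len, PAGE_SIZE);
		struct folio *folio = page_folio(pages[i]);

		memcpy_to_page(pages[i], 0, src, n);
		src += n;
		len -= n;

		/* Crossed into a new folio: dirty the previous one. */
		if (folio != last) {
			if (last) {
				folio_lock(last);
				folio_mark_dirty(last);
				folio_unlock(last);
			}
			last = folio;
		}
	}

	/* Delay the final mark-dirty until the end. */
	if (last) {
		folio_lock(last);
		folio_mark_dirty(last);
		folio_unlock(last);
	}
}

For a 2MB buffer backed by one large folio that ends up being a single
folio_mark_dirty() instead of 512 set_page_dirty_lock() calls, and for
order-0 pages it degenerates to the one-call-per-page behavior we have
today.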
Thanks,
Joanne
>
> Thanks,
> Horst