Re: [REGRESSION] [PATCH] ceph: fix num_ops OBOE when crypto allocation fails
From: Viacheslav Dubeyko
Date: Mon Mar 16 2026 - 13:48:40 EST
On Sun, 2026-03-15 at 16:25 -0700, Sam Edwards wrote:
> move_dirty_folio_in_page_array() may fail if the file is encrypted, the
> dirty folio is not the first in the batch, and it fails to allocate a
> bounce buffer to hold the ciphertext. When that happens,
> ceph_process_folio_batch() simply redirties the folio and flushes the
> current batch -- it can retry that folio in a future batch.
>
How can this issue be reproduced? Do you have a reproduction script or anything
like that?
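For example, is an access pattern like the sketch below enough, assuming the
file lives in an fscrypt-encrypted directory on CephFS and the bounce-buffer
allocation is additionally forced to fail (memory pressure, fault injection)?
The path and offsets here are made up, and I haven't tried it:

/*
 * Hypothetical reproducer sketch (untested): dirty two non-contiguous
 * folios of an encrypted file so that the second one starts a new
 * write extent, then trigger writeback of both in one batch.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* made-up path inside an encrypted directory on a CephFS mount */
	const char *path = "/mnt/cephfs/encrypted/victim";
	char buf[4096];
	int fd;

	memset(buf, 0xab, sizeof(buf));

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* two dirty folios with a hole between them -> two write extents */
	if (pwrite(fd, buf, sizeof(buf), 0) < 0 ||
	    pwrite(fd, buf, sizeof(buf), 8 * 4096) < 0) {
		perror("pwrite");
		return 1;
	}

	/* flush both folios in one writeback batch */
	if (fsync(fd) < 0)
		perror("fsync");

	close(fd);
	return 0;
}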
> However, if this failed folio is not contiguous with the last folio that
> did make it into the batch, then ceph_process_folio_batch() has already
> incremented `ceph_wbc->num_ops`; because it doesn't follow through and
> add the discontiguous folio to the array, ceph_submit_write() -- which
> expects that `ceph_wbc->num_ops` accurately reflects the number of
> contiguous ranges (and therefore the required number of "write extent"
> ops) in the writeback -- will panic the kernel:
>
> BUG_ON(ceph_wbc->op_idx + 1 != req->r_num_ops);
I don't quite follow. We decrement ceph_wbc->num_ops, but the BUG_ON() operates
on req->r_num_ops. How does req->r_num_ops receive the value of
ceph_wbc->num_ops?
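Is the expected path roughly the following (this is only my paraphrase from
memory, with approximate arguments, so please correct me if I misread the
code)?

	/*
	 * Assumption: the OSD request for the batch is allocated with
	 * ceph_wbc->num_ops, and ceph_osdc_new_request() stores that count
	 * in req->r_num_ops.
	 */
	req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, vino,
				    offset, &len, 0, ceph_wbc->num_ops,
				    CEPH_OSD_OP_WRITE, CEPH_OSD_FLAG_WRITE,
				    snapc, truncate_seq, truncate_size,
				    false);

	/*
	 * Later, after op_idx + 1 extent ops have been set up for the folios
	 * that actually made it into the page array:
	 */
	BUG_ON(ceph_wbc->op_idx + 1 != req->r_num_ops);

If that is the connection, then I can see how a stale increment of
ceph_wbc->num_ops ends up tripping the BUG_ON().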
>
> Fix this crash by decrementing `ceph_wbc->num_ops` back to the correct
> value when move_dirty_folio_in_page_array() fails, but the folio already
> started counting a new (i.e. still-empty) extent.
>
> The defect corrected by this patch has existed since 2022 (see first
> `Fixes:`), but another bug blocked multi-folio encrypted writeback until
> recently (see second `Fixes:`). The second commit made it into 6.18.16,
> 6.19.6, and 7.0-rc1, unmasking the panic in those versions. This patch
> therefore fixes a regression (panic) introduced by cac190c7674f.
>
> Cc: stable@xxxxxxxxxxxxxxx # v6.18+
> Fixes: d55207717ded ("ceph: add encryption support to writepage and writepages")
> Fixes: cac190c7674f ("ceph: fix write storm on fscrypted files")
> Signed-off-by: Sam Edwards <CFSworks@xxxxxxxxx>
> ---
> fs/ceph/addr.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> index e87b3bb94ee8..f366e159ffa6 100644
> --- a/fs/ceph/addr.c
> +++ b/fs/ceph/addr.c
> @@ -1366,6 +1366,10 @@ void ceph_process_folio_batch(struct address_space *mapping,
> rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
> folio);
> if (rc) {
> + /* Did we just begin a new contiguous op? Nevermind! */
> + if (ceph_wbc->len == 0)
> + ceph_wbc->num_ops--;
> +
> folio_redirty_for_writepage(wbc, folio);
> folio_unlock(folio);
> break;
We change ceph_wbc->num_ops, ceph_wbc->offset, and ceph_wbc->len here:
} else if (!is_folio_index_contiguous(ceph_wbc, folio)) {
if (is_num_ops_too_big(ceph_wbc)) {
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
break;
}
ceph_wbc->num_ops++;
ceph_wbc->offset = (u64)folio_pos(folio);
ceph_wbc->len = 0;
}
First of all, technically speaking, move_dirty_folio_in_page_array() can fail
even when is_folio_index_contiguous() returns true. Do you mean that we don't
need to decrement ceph_wbc->num_ops in that case?
Secondly, do we need to correct ceph_wbc->offset as well?
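To illustrate the second question, I mean something like the following
(untested sketch on top of your change, only to show what I am asking about):

	rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
					    folio);
	if (rc) {
		/* Did we just begin a new contiguous op? Nevermind! */
		if (ceph_wbc->len == 0) {
			ceph_wbc->num_ops--;
			/*
			 * ceph_wbc->offset has also been overwritten with
			 * folio_pos(folio) of the folio that we are now
			 * dropping from the batch. Does it have to be
			 * restored too (and to what value), or is it simply
			 * not used after we break out of the loop here?
			 */
		}

		folio_redirty_for_writepage(wbc, folio);
		folio_unlock(folio);
		break;
	}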
Thanks,
Slava.