[REGRESSION] [PATCH v2] ceph: fix num_ops off-by-one when crypto allocation fails

From: Sam Edwards

Date: Tue Mar 17 2026 - 22:38:22 EST


move_dirty_folio_in_page_array() may fail if the file is encrypted, the
dirty folio is not the first in the batch, and it fails to allocate a
bounce buffer to hold the ciphertext. When that happens,
ceph_process_folio_batch() simply redirties the folio and flushes the
current batch -- it can retry that folio in a future batch.

However, if this failed folio is not contiguous with the last folio that
did make it into the batch, then ceph_process_folio_batch() has already
incremented `ceph_wbc->num_ops`; because it doesn't follow through and
add the discontiguous folio to the array, ceph_submit_write() -- which
expects that `ceph_wbc->num_ops` accurately reflects the number of
contiguous ranges (and therefore the required number of "write extent"
ops) in the writeback -- will panic the kernel:

BUG_ON(ceph_wbc->op_idx + 1 != req->r_num_ops);

This issue can be reproduced on affected kernels by writing to
fscrypt-enabled CephFS file(s) with a 4KiB-written/4KiB-skipped/repeat
pattern (total filesize should not matter) and gradually increasing the
system's memory pressure until a bounce buffer allocation fails.
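The write pattern described above can be generated with a short shell loop
(a sketch only; `TARGET` and the block count are placeholders -- point
`TARGET` at a file on an fscrypt-enabled CephFS mount, and separately apply
memory pressure, e.g. with a memory-hog workload, until a bounce-buffer
allocation fails):

```shell
#!/bin/sh
# Write 4KiB of data, skip 4KiB, repeat -- leaves every other 4KiB block
# as a hole.  TARGET is a placeholder; use a file on fscrypt-enabled CephFS.
TARGET="${TARGET:-/tmp/ceph-num-ops-repro}"
COUNT=64  # number of written blocks; total file size should not matter

i=0
while [ "$i" -lt "$COUNT" ]; do
    # seek in units of bs: block i*2 gets data, block i*2+1 stays a hole
    dd if=/dev/zero of="$TARGET" bs=4096 count=1 seek=$((i * 2)) \
       conv=notrunc 2>/dev/null
    i=$((i + 1))
done
```

Each written block starts a fresh discontiguous extent at writeback time,
which is what makes the batch accounting path easy to hit.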

Fix this crash by decrementing `ceph_wbc->num_ops` back to the correct
value when move_dirty_folio_in_page_array() fails after the folio has
already started a new (i.e. still-empty) extent.

The defect corrected by this patch has existed since 2022 (see first
`Fixes:`), but another bug blocked multi-folio encrypted writeback until
recently (see second `Fixes:`). The second commit made it into 6.18.16,
6.19.6, and 7.0-rc1, unmasking the panic in those versions. This patch
therefore fixes a regression (panic) introduced by cac190c7674f.

Cc: stable@xxxxxxxxxxxxxxx # v6.18+
Fixes: d55207717ded ("ceph: add encryption support to writepage and writepages")
Fixes: cac190c7674f ("ceph: fix write storm on fscrypted files")
Signed-off-by: Sam Edwards <CFSworks@xxxxxxxxx>
---

Changes v1->v2:
- Added a paragraph to the commit log briefly explaining the I/O pattern to
  reproduce the issue (thanks Slava)

- Additionally Cc'd regressions@xxxxxxxxxxxxxxx as required when handling
  regressions

Feedback not addressed:
- "Commit message should link to the mentioned BUG_ON line in a source listing"
  (link would not really help anyone, and the line is a moving target anyway)

- "Commit message should indicate that ceph_wbc->num_ops is passed to
  ceph_osdc_new_request() to explain why ceph_wbc->num_ops == req->r_num_ops"
  (ceph_wbc->num_ops is easy enough to search; and the cause->effect of the
  BUG_ON() is secondary to the central point that ceph_process_folio_batch()
  is responsible for ensuring ceph_wbc->num_ops is correct before returning)

- "An issue should be filed in the Ceph Redmine, linked via Closes:"
  (thanks Ilya for clarifying this is unnecessary)

---
fs/ceph/addr.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index e87b3bb94ee8..f366e159ffa6 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1366,6 +1366,10 @@ void ceph_process_folio_batch(struct address_space *mapping,
rc = move_dirty_folio_in_page_array(mapping, wbc, ceph_wbc,
folio);
if (rc) {
+ /* Did we just begin a new contiguous op? Nevermind! */
+ if (ceph_wbc->len == 0)
+ ceph_wbc->num_ops--;
+
folio_redirty_for_writepage(wbc, folio);
folio_unlock(folio);
break;
--
2.52.0