Re: [PATCH V1] accel/amdxdna: Support read-only user-pointer BO mappings

From: Mario Limonciello

Date: Tue Mar 31 2026 - 12:49:31 EST




On 3/31/26 11:29, Lizhi Hou wrote:

On 3/31/26 09:13, Mario Limonciello wrote:


On 3/26/26 11:27, Lizhi Hou wrote:
From: Max Zhen <max.zhen@xxxxxxx>

Update the amdxdna user-pointer (ubuf) BO path to support creating buffer
objects from read-only user mappings.

Detect read-only VMAs by checking VMA permissions across all user virtual
address ranges associated with the BO. When all entries are read-only, pin
user pages without FOLL_WRITE and export the resulting dmabuf as read-only
(O_RDONLY).

This allows userptr BOs backed by read-only mappings to be safely imported
and used without requiring write access, which was previously rejected due
to unconditional FOLL_WRITE usage.

Signed-off-by: Max Zhen <max.zhen@xxxxxxx>
Signed-off-by: Lizhi Hou <lizhi.hou@xxxxxxx>
---
  drivers/accel/amdxdna/amdxdna_ubuf.c | 30 ++++++++++++++++++++++++++--
  1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/drivers/accel/amdxdna/amdxdna_ubuf.c b/drivers/accel/amdxdna/amdxdna_ubuf.c
index 4c0647057759..1a0e2a274170 100644
--- a/drivers/accel/amdxdna/amdxdna_ubuf.c
+++ b/drivers/accel/amdxdna/amdxdna_ubuf.c
@@ -125,6 +125,27 @@ static const struct dma_buf_ops amdxdna_ubuf_dmabuf_ops = {
      .vunmap = amdxdna_ubuf_vunmap,
  };
+static int readonly_va_entry(struct amdxdna_drm_va_entry *va_ent)
+{
+    struct mm_struct *mm = current->mm;
+    struct vm_area_struct *vma;
+    int ret;
+
+    mmap_read_lock(mm);
+
+    vma = find_vma(mm, va_ent->vaddr);
+    if (!vma ||
+        vma->vm_start > va_ent->vaddr ||
+        vma->vm_end < va_ent->vaddr ||
+        vma->vm_end - va_ent->vaddr < va_ent->len)
+        ret = -ENOENT;

The check on line "vma->vm_end < va_ent->vaddr" appears to be unreachable.
find_vma() is documented to return the first VMA where vma->vm_end > addr, so
if vma is non-NULL, this condition can never be true.
Sure. I will remove it.

+    else
+        ret = vma->vm_flags & VM_WRITE ? 0 : 1;
+
+    mmap_read_unlock(mm);
+    return ret;
+}
+
  struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
                   u32 num_entries, void __user *va_entries)
  {
@@ -134,6 +155,7 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
      struct amdxdna_ubuf_priv *ubuf;
      u32 npages, start = 0;
      struct dma_buf *dbuf;
+    bool readonly = true;
      int i, ret;
      DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
@@ -172,6 +194,10 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
              ret = -EINVAL;
              goto free_ent;
          }
+
+        /* Pin pages as writable as long as not all entries are read-only. */
+        if (readonly && readonly_va_entry(&va_ent[i]) != 1)
+            readonly = false;
      }

The check "!= 1" treats errors the same as writable VMAs. readonly_va_entry()
returns -ENOENT for errors, 0 for writable, and 1 for read-only.

Maybe extra error handling is needed?

The idea is to specifically handle the readonly_va_entry() == 1 case. For -ENOENT and 0, it falls back to a writable pin, and error handling is then done by checking the pin_user_pages_fast() return value.
OK.


Lizhi


        ubuf->nr_pages = exp_info.size >> PAGE_SHIFT;
@@ -194,7 +220,7 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
          npages = va_ent[i].len >> PAGE_SHIFT;
            ret = pin_user_pages_fast(va_ent[i].vaddr, npages,
-                      FOLL_WRITE | FOLL_LONGTERM,
+                      (readonly ? 0 : FOLL_WRITE) | FOLL_LONGTERM,
                        &ubuf->pages[start]);
          if (ret >= 0) {
              start += ret;
@@ -211,7 +237,7 @@ struct dma_buf *amdxdna_get_ubuf(struct drm_device *dev,
        exp_info.ops = &amdxdna_ubuf_dmabuf_ops;
      exp_info.priv = ubuf;
-    exp_info.flags = O_RDWR | O_CLOEXEC;
+    exp_info.flags = (readonly ? O_RDONLY : O_RDWR) | O_CLOEXEC;
        dbuf = dma_buf_export(&exp_info);
      if (IS_ERR(dbuf)) {