Re: [PATCH v13 05/12] dmaengine: qcom: bam_dma: add support for BAM locking

From: Manivannan Sadhasivam

Date: Mon Mar 23 2026 - 09:45:02 EST


On Mon, Mar 23, 2026 at 02:36:59PM +0100, Bartosz Golaszewski wrote:
> On Mon, Mar 23, 2026 at 10:35 AM Manivannan Sadhasivam <mani@xxxxxxxxxx> wrote:
> >
> > On Tue, Mar 17, 2026 at 03:02:12PM +0100, Bartosz Golaszewski wrote:
> > > Add support for BAM pipe locking. To that end: when starting DMA on an RX
> > > channel - prepend the existing queue of issued descriptors with an
> > > additional "dummy" command descriptor with the LOCK bit set. Once the
> > > transaction is done (no more issued descriptors), issue one more dummy
> > > descriptor with the UNLOCK bit.
> >
> > I've left some comments in v12, but looks like you've missed them.
>
> Sorry for that, as I explained in private, this email did not end up
> in my inbox and I didn't see it on lore.
>
> > >
> > > +static int bam_metadata_attach(struct dma_async_tx_descriptor *desc, void *data, size_t len)
> > > +{
> > > + struct bam_chan *bchan = to_bam_chan(desc->chan);
> > > + const struct bam_device_data *bdata = bchan->bdev->dev_data;
> > > + struct bam_desc_metadata *metadata = data;
> > > +
> > > + if (!data)
> > > + return -EINVAL;
> > > +
> > > + if (!bdata->pipe_lock_supported)
> > > + return -EOPNOTSUPP;
> >
> > As mentioned in v12, you should return 0 to avoid erroring out the clients if
> > pipe lock is not supported.
> >
>
> If the client attaches the scratchpad register then it probably does
> want to use locking, right? On the other hand, I assume you're
> thinking about a situation where the client wants locking but BAM does
> not support it. It's unlikely but ok, I'll change it.
>

Locking is supported only since BAM v1.4.0, so I was trying to avoid erroring
out clients that want to use plain DMA on older (pre-1.4.0) platforms. I'm not
sure whether such platforms exist, but it's possible.
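
To illustrate the suggested behaviour with a standalone mock (untested;
the struct and function names below are stubs mirroring the patch, not
the actual driver code):

```c
/*
 * Sketch of the suggested check: return 0 (treat the attach as a no-op)
 * instead of -EOPNOTSUPP when the controller predates BAM v1.4.0, so
 * clients attaching metadata keep working on older platforms.
 */
#include <assert.h>
#include <errno.h>
#include <stddef.h>

struct bam_device_data {
	int pipe_lock_supported;	/* set on BAM >= v1.4.0 */
};

/* mimics the relevant part of bam_metadata_attach() */
static int metadata_attach_sketch(const struct bam_device_data *bdata,
				  void *data)
{
	if (!data)
		return -EINVAL;

	/* pipe locking only exists on v1.4.0+; silently succeed otherwise */
	if (!bdata->pipe_lock_supported)
		return 0;

	/* ...store the scratchpad address from 'data' here... */
	return 0;
}
```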

- Mani

> > >
> > > +static struct bam_async_desc *
> > > +bam_make_lock_desc(struct bam_chan *bchan, struct scatterlist *sg,
> > > + struct bam_cmd_element *ce, unsigned long flag)
> > > +{
> > > + struct dma_chan *chan = &bchan->vc.chan;
> > > + struct bam_async_desc *async_desc;
> > > + struct bam_desc_hw *desc;
> > > + struct virt_dma_desc *vd;
> > > + struct virt_dma_chan *vc;
> > > + unsigned int mapped;
> > > + dma_cookie_t cookie;
> > > + int ret;
> > > +
> > > + sg_init_table(sg, 1);
> > > +
> > > + async_desc = kzalloc_flex(*async_desc, desc, 1, GFP_NOWAIT);
> > > + if (!async_desc) {
> > > + dev_err(bchan->bdev->dev, "failed to allocate the BAM lock descriptor\n");
> > > + return NULL;
> > > + }
> > > +
> > > + async_desc->num_desc = 1;
> > > + async_desc->curr_desc = async_desc->desc;
> > > + async_desc->dir = DMA_MEM_TO_DEV;
> > > +
> > > + desc = async_desc->desc;
> > > +
> > > + bam_prep_ce_le32(ce, bchan->scratchpad_addr, BAM_WRITE_COMMAND, 0);
> > > + sg_set_buf(sg, ce, sizeof(*ce));
> > > +
> > > + mapped = dma_map_sg_attrs(chan->slave, sg, 1, DMA_TO_DEVICE, DMA_PREP_CMD);
> > > + if (!mapped) {
> > > + kfree(async_desc);
> > > + return NULL;
> > > + }
> > > +
> > > + desc->flags |= cpu_to_le16(DESC_FLAG_CMD | flag);
> > > + desc->addr = sg_dma_address(sg);
> > > + desc->size = sizeof(struct bam_cmd_element);
> > > +
> > > + vc = &bchan->vc;
> > > + vd = &async_desc->vd;
> > > +
> > > + dma_async_tx_descriptor_init(&vd->tx, &vc->chan);
> > > + vd->tx.flags = DMA_PREP_CMD;
> > > + vd->tx.desc_free = vchan_tx_desc_free;
> > > + vd->tx_result.result = DMA_TRANS_NOERROR;
> > > + vd->tx_result.residue = 0;
> > > +
> > > + cookie = dma_cookie_assign(&vd->tx);
> > > + ret = dma_submit_error(cookie);
> > > + if (ret)
> > > + return NULL;
> >
> > You are leaking async_desc here.
> >
>
> Yeah, not only that but also should unmap the sg here too. Thanks.
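
[For reference, a standalone sketch of the fixed error path being agreed
on here (untested; all helpers below are stubs standing in for the
kernel/driver functions, not the real DMA API):]

```c
/*
 * Mock of the corrected error handling: on submit failure, the sg
 * mapping must be undone and async_desc freed before returning NULL.
 */
#include <assert.h>
#include <stdlib.h>

static int unmap_calls;

/* stand-ins for dma_unmap_sg() and dma_submit_error() */
static void fake_dma_unmap_sg(void) { unmap_calls++; }
static int fake_dma_submit_error(int cookie) { return cookie < 0; }

static void *make_lock_desc_sketch(int cookie)
{
	void *async_desc = malloc(16);

	if (!async_desc)
		return NULL;

	/* ...sg mapping and descriptor setup would happen here... */

	if (fake_dma_submit_error(cookie)) {
		/* the leak fix: undo the mapping, then free the descriptor */
		fake_dma_unmap_sg();
		free(async_desc);
		return NULL;
	}

	return async_desc;
}
```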
>
> > > +
> > > + return async_desc;
> > > +}
> > > +
> > > +static int bam_do_setup_pipe_lock(struct bam_chan *bchan, bool lock)
> > > +{
> > > + struct bam_device *bdev = bchan->bdev;
> > > + const struct bam_device_data *bdata = bdev->dev_data;
> > > + struct bam_async_desc *lock_desc;
> > > + struct bam_cmd_element *ce;
> > > + struct scatterlist *sgl;
> > > + unsigned long flag;
> > > +
> > > + lockdep_assert_held(&bchan->vc.lock);
> > > +
> > > + if (!bdata->pipe_lock_supported || !bchan->scratchpad_addr ||
> > > + bchan->slave.direction != DMA_MEM_TO_DEV)
> > > + return 0;
> > > +
> > > + if (lock) {
> > > + sgl = &bchan->lock_sg;
> > > + ce = &bchan->lock_ce;
> > > + flag = DESC_FLAG_LOCK;
> > > + } else {
> > > + sgl = &bchan->unlock_sg;
> > > + ce = &bchan->unlock_ce;
> > > + flag = DESC_FLAG_UNLOCK;
> > > + }
> > > +
> > > + lock_desc = bam_make_lock_desc(bchan, sgl, ce, flag);
> > > + if (!lock_desc)
> > > + return -ENOMEM;
> > > +
> > > + if (lock)
> > > + list_add(&lock_desc->vd.node, &bchan->vc.desc_issued);
> > > + else
> > > + list_add_tail(&lock_desc->vd.node, &bchan->vc.desc_issued);
> > > +
> > > + bchan->locked = lock;
> >
> > What is this flag for?
> >
>
> Just a leftover. I'll drop it, thanks.
>
> > >
> > > +struct bam_desc_metadata {
> > > + phys_addr_t scratchpad_addr;
> >
> > I think it'd be worth adding a comment for this.
> >
>
> Will do.
>
> Bart

--
மணிவண்ணன் சதாசிவம்