Re: [net-next,4/4] octeontx2-pf: cn20k: Use unified Halo context

From: Simon Horman

Date: Wed Mar 18 2026 - 11:18:36 EST


This is an AI-generated review of your patch. The human sending this
email has considered the AI review valid, or at least plausible.

For local reproduction steps see https://netdev-ai.bots.linux.dev/ai-local.html
---
octeontx2-pf: cn20k: Use unified Halo context

Use unified Halo context present in CN20K hardware for
octeontx2 netdevs instead of aura and pool contexts.

> diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
> index a5a8f4558717..db2742588bf6 100644
> --- a/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c
> +++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c

[ ... ]

> +int cn20k_npa_alloc_dpc(struct otx2_nic *nic)
> +{
> +	struct npa_cn20k_dpc_alloc_req *req;
> +	struct npa_cn20k_dpc_alloc_rsp *rsp;
> +	int err;
> +
> +	req = otx2_mbox_alloc_msg_npa_cn20k_dpc_alloc(&nic->mbox);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	/* Count successful ALLOC requests only */
> +	req->dpc_conf = 1ULL << 4;
> +
> +	err = otx2_sync_mbox_msg(&nic->mbox);
> +	if (err)
> +		return err;
> +
> +	rsp = (struct npa_cn20k_dpc_alloc_rsp *)otx2_mbox_get_rsp(&nic->mbox.mbox,
> +								  0, &req->hdr);
> +
> +	nic->npa_dpc = rsp->cntr_id;
                       ^^^^

Can this dereference crash? otx2_mbox_get_rsp() can return ERR_PTR(-ENODEV),
and the return value is dereferenced here without a check. In mbox.c,
otx2_mbox_get_rsp() returns an error pointer on a message count mismatch or
a message ID mismatch:

drivers/net/ethernet/marvell/octeontx2/af/mbox.c:otx2_mbox_get_rsp() {
	...
	if (!mbox->dev->msgs_acked)
		return ERR_PTR(-ENODEV);
	...
	if (msgs_acked != 1)
		return ERR_PTR(-ENODEV);
	...
}

Every other caller of otx2_mbox_get_rsp() in this driver checks IS_ERR()
before dereferencing:

drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c:cn10k_alloc_leaf_profile() {
	...
	rsp = (struct nix_bandprof_get_hwinfo_rsp *)
	      otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);
	...
}

drivers/net/ethernet/marvell/octeontx2/nic/cn20k.c:__cn20k_tc_alloc_entry() {
	...
	rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp(...);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);
	...
}

drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c:otx2_alloc_mcam_entries() {
	...
	rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp(...);
	if (IS_ERR(rsp)) {
		...
		return PTR_ERR(rsp);
	}
	...
}
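A fix matching those callers would check the return value before
dereferencing it; a minimal sketch, reusing the identifiers from the
quoted hunk:

```c
	rsp = (struct npa_cn20k_dpc_alloc_rsp *)otx2_mbox_get_rsp(&nic->mbox.mbox,
								  0, &req->hdr);
	if (IS_ERR(rsp))
		return PTR_ERR(rsp);

	nic->npa_dpc = rsp->cntr_id;
```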

> +
> + return 0;
> +}
> +
> +int cn20k_npa_free_dpc(struct otx2_nic *nic)
> +{
> +	struct npa_cn20k_dpc_free_req *req;
> +	int err;
> +
> +	req = otx2_mbox_alloc_msg_npa_cn20k_dpc_free(&nic->mbox);
> +	if (!req)
> +		return -ENOMEM;
> +
> +	req->cntr_id = nic->npa_dpc;
> +
> +	err = otx2_sync_mbox_msg(&nic->mbox);
> +
> +	return err;
> +}

Does this need the mbox lock? Every other function called from
otx2_free_hw_resources() that performs mbox operations takes the lock
before issuing its messages.

For example, otx2_ctx_disable() does:

drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c:otx2_ctx_disable() {
	...
	mutex_lock(&mbox->lock);
	if (nixlf) {
		...
		otx2_sync_mbox_msg(mbox);
	}
	...
	mutex_unlock(&mbox->lock);
}

And cn10k_free_all_ipolicers() does:

drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c:cn10k_free_all_ipolicers() {
	...
	mutex_lock(&pfvf->mbox.lock);
	...
	otx2_sync_mbox_msg(&pfvf->mbox);
	mutex_unlock(&pfvf->mbox.lock);
}

In contrast, cn20k_npa_alloc_dpc() is called from otx2_init_hw_resources()
under the mbox lock:

drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c:otx2_init_hw_resources() {
	...
	mutex_lock(&pf->mbox.lock);
	...
	if (is_cn20k(pf->pdev))
		cn20k_npa_alloc_dpc(pf);
	...
	mutex_unlock(&pf->mbox.lock);
}

Without the lock here in cn20k_npa_free_dpc(), could concurrent mbox access
from VF mbox forwarding corrupt the shared mbox buffer? VF mbox forwarding
runs from the mbox_pfvf_wq workqueue via otx2_forward_vf_mbox_msgs().
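Assuming cn20k_npa_free_dpc() is never entered with the lock already held,
one way to close the race would mirror the otx2_ctx_disable() pattern above;
a sketch, not a tested patch:

```c
int cn20k_npa_free_dpc(struct otx2_nic *nic)
{
	struct npa_cn20k_dpc_free_req *req;
	int err;

	mutex_lock(&nic->mbox.lock);
	req = otx2_mbox_alloc_msg_npa_cn20k_dpc_free(&nic->mbox);
	if (!req) {
		mutex_unlock(&nic->mbox.lock);
		return -ENOMEM;
	}

	req->cntr_id = nic->npa_dpc;

	err = otx2_sync_mbox_msg(&nic->mbox);
	mutex_unlock(&nic->mbox.lock);

	return err;
}
```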

[ ... ]