Re: [PATCH] arm64: swiotlb: Don't shrink default buffer when bounce is forced
From: Aneesh Kumar K.V
Date: Tue Mar 17 2026 - 01:29:30 EST
Marek Szyprowski <m.szyprowski@xxxxxxxxxxx> writes:
> On 06.02.2026 07:11, Aneesh Kumar K.V wrote:
>> Catalin Marinas <catalin.marinas@xxxxxxx> writes:
>>> On Tue, Jan 20, 2026 at 12:31:02PM +0530, Aneesh Kumar K.V (Arm) wrote:
>>>> arm64 reduces the default swiotlb size (for unaligned kmalloc()
>>>> bouncing) when it detects that no swiotlb bouncing is needed.
>>>>
>>>> If swiotlb bouncing is explicitly forced via the command line
>>>> (swiotlb=force), this heuristic must not apply. Add a swiotlb helper to
>>>> query the forced-bounce state and use it to skip the resize when
>>>> bouncing is forced.
>>> I think the logic you proposed in reply to Robin might work better but
>>> have you actually hit a problem that triggered this patch? Do people
>>> passing swiotlb=force expect a specific size for the buffer?
>>>
>> This issue was observed while implementing swiotlb for a trusted device.
>> I was testing the protected swiotlb space using the swiotlb=force
>> option, which causes the device to use swiotlb even in protected mode.
>> As per Robin, an end user passing the swiotlb=force option is also
>> expected to specify a custom swiotlb size.
>
> Does the above mean that it works fine when user provides both
> swiotlb=force and custom swiotlb size, so no changes in the code are
> actually needed?
>
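For context, the combination being discussed would look something like
this on the kernel command line (the slab count below is purely
illustrative; the format is swiotlb=<nslabs>[,force]):

```
swiotlb=65536,force
```

With 65536 slabs of 2 KiB each, that requests a 128 MiB bounce buffer
regardless of the arm64 shrink heuristic.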
swiotlb_adjust_size() checks whether default_nslabs has already been
changed and, if so, skips the update, so that SWIOTLB size adjustments
made by different subsystems do not override each other:
void __init swiotlb_adjust_size(unsigned long size)
{
	/*
	 * If swiotlb parameter has not been specified, give a chance to
	 * architectures such as those supporting memory encryption to
	 * adjust/expand SWIOTLB size for their use.
	 */
	if (default_nslabs != IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT)
		return;
To handle swiotlb_force alone, we can do something like this:
modified kernel/dma/swiotlb.c
@@ -209,6 +209,8 @@ unsigned long swiotlb_size_or_default(void)
 
 void __init swiotlb_adjust_size(unsigned long size)
 {
+	unsigned long nslabs;
+
 	/*
 	 * If swiotlb parameter has not been specified, give a chance to
 	 * architectures such as those supporting memory encryption to
@@ -218,7 +220,13 @@ void __init swiotlb_adjust_size(unsigned long size)
 		return;
 
 	size = ALIGN(size, IO_TLB_SIZE);
-	default_nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
+	/*
+	 * Don't allow the size to be reduced when swiotlb bounce is forced.
+	 */
+	if (swiotlb_force_bounce && nslabs < default_nslabs)
+		return;
+	default_nslabs = nslabs;
 	if (round_up_default_nslabs())
 		size = default_nslabs << IO_TLB_SHIFT;
 	pr_info("SWIOTLB bounce buffer size adjusted to %luMB", size >> 20);