Re: [RFC PATCH 2/4] mm/memory-tiers: introduce socket-aware topology management for NUMA nodes

From: Jonathan Cameron

Date: Wed Mar 18 2026 - 08:23:24 EST


On Mon, 16 Mar 2026 14:12:50 +0900
Rakie Kim <rakie.kim@xxxxxx> wrote:

> The existing NUMA distance model provides only relative latency values
> between nodes and lacks any notion of structural grouping such as socket
> or package boundaries. As a result, memory policies based solely on
> distance cannot differentiate between nodes that are physically local
> to the same socket and those that belong to different sockets. This
> often leads to inefficient cross-socket demotion and suboptimal memory
> placement.
>
> This patch introduces a socket-aware topology management layer that
> groups NUMA nodes according to their physical package (socket)
> association. Each group forms a "memory package" that explicitly links
> CPU and memory-only nodes (such as CXL or HBM) under the same socket.
> This structure allows the kernel to interpret NUMA topology in a way
> that reflects real hardware locality rather than relying solely on
> flat distance values.
>
> By maintaining socket-level grouping, the kernel can:
> - Enforce demotion and promotion policies that stay within the same
> socket.
> - Avoid unintended cross-socket migrations that degrade performance.
> - Provide a structural abstraction for future policy and tiering logic.
>
> Unlike ACPI-provided distance tables, which offer static and symmetric
> relationships, this socket-aware model captures the true hardware
> hierarchy and provides a flexible foundation for systems where the
> distance matrix alone cannot accurately express socket boundaries or
> asymmetric topologies.

Careful with the generalities in here. There is no way to derive the
'true' hierarchy. What this is doing is applying a particular set
of heuristics to the data that ACPI provided and attempting to use
that to derive relationships. In simple cases that might work fine.

Doing so is OK in an RFC for discussion but this will need testing
against a wide range of topologies to at least ensure it fails gracefully.
Note we've had to paper over quite a few topology assumptions in the
kernel and this feels like another one that will bite us later.

I'd avoid the socket terminology as multiple NUMA nodes per socket
have been a thing for many years. Today there can even be multiple
IO dies with a complex 'distance' relationship wrt the CPUs
in that socket. Topologies of memory controllers in those
packages are another level of complexity.


Otherwise a few general things from a quick look.

I'd avoid goto out; where out just returns. That just makes code
flow more complex and often makes for longer code. When you have
an error and there is nothing to cleanup just return immediately.

guard() / scoped_guard() will help simplify some of the locking.

Thanks,

Jonathan