Re: [LSF/MM/BPF TOPIC][RFC PATCH v4 00/27] Private Memory Nodes (w/ Compressed RAM)

From: David Hildenbrand (Arm)

Date: Tue Mar 17 2026 - 09:26:04 EST


On 2/22/26 09:48, Gregory Price wrote:
> Topic type: MM

Hi Gregory,

stumbling over this again, I have some questions. I'll just ignore the
compressed RAM bits for now and focus on use cases where promotion etc.
are not relevant :)

[...]

>
> TL;DR
> ===
>
> N_MEMORY_PRIVATE is all about isolating NUMA nodes and then punching
> explicit holes in that isolation to do useful things we couldn't do
> before without re-implementing entire portions of mm/ in a driver.

Just to clarify: we don't currently have any mechanism to expose, say,
SPM/PMEM/whatsoever to the buddy allocator through the dax/kmem driver
and *not* have random allocations end up on it, correct?

Assume we online the memory to ZONE_MOVABLE, still other (fallback)
allocations might end up on that memory.

How would we currently handle something like that? (do we have drivers
for that? I'd assume that drivers would only migrate some user memory to
ZONE_DEVICE memory.)

Assuming we don't have such a mechanism, I assume that part of your
proposal would be very interesting: online the memory to a
"special"/"restricted" (you call it private) NUMA node, where all
memory of that NUMA node will only be consumable through
mbind() and friends.

Any other allocations (including automatic page migration etc) would not
end up on that memory.

Thinking of some "terribly slow" or "terribly fast" memory that we don't
want to involve in automatic memory tiering, being able to just let
selected workloads consume that memory sounds very helpful.


(I'm wondering whether allocations might somehow get migrated out of
the node, for example during memory offlining, which might also not be
desirable.)

I am not sure if __GFP_PRIVATE etc is really required for that. But some
mechanism to make that work seems extremely helpful.

Because ...

>
>
> /* This is my memory. There are many like it, but this one is mine. */
> rc = add_private_memory_driver_managed(nid, start, size, name, flags,
>                                        online_type, private_context);
>
> page = alloc_pages_node(nid, __GFP_PRIVATE, 0);
>
> /* Ok but I want to do something useful with it */
> static const struct node_private_ops ops = {
>         .migrate_to = my_migrate_to,
>         .folio_migrate = my_folio_migrate,
>         .flags = NP_OPS_MIGRATION | NP_OPS_MEMPOLICY,
> };
> node_private_set_ops(nid, &ops);
>
> /* And now I can use mempolicy with my memory */
> buf = mmap(...);
> mbind(buf, len, mode, private_node, ...);
> buf[0] = 0xdeadbeef; /* Faults onto private node */

... just being able to consume that memory through mbind() and having
guarantees sounds extremely helpful.

[...]

>
>
> Background
> ===
>
> Today, drivers that want mm-like services on non-general-purpose
> memory either use ZONE_DEVICE (self-managed memory) or hotplug into
> N_MEMORY and accept the risk of uncontrolled allocation.
>
> Neither option provides what we really want - the ability to:
> 1) selectively participate in mm/ subsystems, while
> 2) isolating that memory from general purpose use.
>
> Some device-attached memory cannot be managed as fully general-purpose
> system RAM. CXL devices with inline compression, for example, may
> corrupt data or crash the machine if the compression ratio drops
> below a threshold -- we simply run out of physical memory.
>
> This is a hard problem to solve: how does an operating system deal
> with a device that basically lies about how much capacity it has?
>
> (We'll discuss that in the CRAM section)
>
>
> Core Proposal: N_MEMORY_PRIVATE
> ===
>
> Introduce N_MEMORY_PRIVATE, a NUMA node state for memory managed by
> the buddy allocator, but excluded from normal allocation paths.
>
> Private nodes:
>
> - Are filtered from zonelist fallback: all existing callers to
> get_page_from_freelist cannot reach these nodes through any
> normal fallback mechanism.

Good.

>
> - Filter allocation requests on __GFP_PRIVATE;
>   numa_zone_allowed() excludes them otherwise.

I think we discussed that in the past, but why can't we find a way that
only people requesting __GFP_THISNODE could allocate that memory, for
example? I guess we'd have to remove it from all "default NUMA bitmaps"
somehow.

>
> Applies to systems with and without cpusets.
>
> GFP_PRIVATE is (__GFP_PRIVATE | __GFP_THISNODE).
>
> Services use it when they need to allocate specifically from
> a private node (e.g., CRAM allocating a destination folio).
>
> No existing allocator path sets __GFP_PRIVATE, so private nodes
> are unreachable by default.
>
> - Use standard struct page / folio. No ZONE_DEVICE, no pgmap,
> no struct page metadata limitations.

Good.

>
> - Use a node-scoped metadata structure to accomplish filtering
> and callback support.
>
> - May participate in the buddy allocator, reclaim, compaction,
> and LRU like normal memory, gated by an opt-in set of flags.
>
> The key abstraction is node_private_ops: a per-node callback table
> registered by a driver or service.
>
> Each callback is individually gated by an NP_OPS_* capability flag.
>
> A driver opts in only to the mm/ operations it needs.
>
> It is similar to ZONE_DEVICE's pgmap at a node granularity.
>
> In fact...
>
>
> Re-use of ZONE_DEVICE Hooks
> ===

I think all of that might not be required for the simplistic use case I
mentioned above (fast/slow memory only to be consumed by selected user
space that opts in through mbind() and friends).

Or are there other use cases for these callbacks?

[...]
>
>
> Flag-gated behavior (NP_OPS_*) controls:
> ===
>
> We use OPS flags to denote what mm/ services we want to allow on our
> private node. I've plumbed these through so far:
>
> NP_OPS_MIGRATION - Node supports migration
> NP_OPS_MEMPOLICY - Node supports mempolicy actions
> NP_OPS_DEMOTION - Node appears in demotion target lists
> NP_OPS_PROTECT_WRITE - Node memory is read-only (wrprotect)
> NP_OPS_RECLAIM - Node supports reclaim
> NP_OPS_NUMA_BALANCING - Node supports numa balancing
> NP_OPS_COMPACTION - Node supports compaction
> NP_OPS_LONGTERM_PIN - Node supports longterm pinning
> NP_OPS_OOM_ELIGIBLE - (MIGRATION | DEMOTION), node is reachable
>                       as normal system ram storage, so it should
>                       be considered in OOM pressure calculations.

I have to think about all that, and whether it would be required as a
first step. I'd assume that in the simplistic use case mentioned above
we might only need to forbid the memory from being used as a fallback,
for OOM handling etc.

Whether reclaim (e.g., swapout) makes sense is a good question.


--
Cheers,

David