[nova] NUMA scheduling
We have been running with NUMA configured for a long time and I don't believe I have seen this behavior. It's important that you configure the flavors / aggregates correctly.
I think this might be what you are looking for:
openstack flavor set m1.large --property hw:cpu_policy=dedicated
Pretty sure we also set this for any flavor that only requires a single NUMA node:
openstack flavor set m1.large --property hw:numa_nodes=1
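For what it's worth, the two properties above can be applied in a single command, and the result is easy to double-check. A minimal sketch, reusing the m1.large flavor name from the commands above:

```shell
# Pin guest vCPUs to dedicated host cores and confine the guest's
# CPUs and memory to a single host NUMA node.
openstack flavor set m1.large \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1

# Confirm both extra specs landed on the flavor.
openstack flavor show m1.large -f value -c properties
```

Note that hw:numa_nodes=1 tells the scheduler to fit the whole guest inside one host NUMA node; it does not pick which node, so the placement question below still applies.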
From: Eric K. Miller <emiller at genesishosting.com>
Sent: Friday, October 16, 2020 8:47 PM
To: Laurent Dumont <laurentfdumont at gmail.com>
Cc: openstack-discuss <openstack-discuss at lists.openstack.org>
Subject: RE: [nova] NUMA scheduling
> As far as I know, numa_nodes=1 just means --> the resources for that VM should run on one NUMA node (so either NUMA0 or NUMA1). If there is space free on both, then it's probably going to pick one of the two?
I thought the same, but it appears that VMs are never scheduled on NUMA1 even when NUMA0 is full (causing the OOM killer to trigger and kill running VMs). I would have hoped that a NUMA node was treated like a host, with VMs balanced across nodes.
The discussion on NUMA handling is long, so I was hoping that there might be information about the latest solution to the problem - or to be told that there isn't a good solution other than using huge pages.
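For reference, the huge-pages route mentioned above is also driven by a flavor extra spec. A sketch, assuming the compute hosts already have huge pages reserved at boot:

```shell
# Back guest RAM with host huge pages. Setting hw:mem_page_size
# implies a guest NUMA topology, so memory is reserved from a
# specific host NUMA node up front instead of being faulted in
# later, which is what avoids the OOM scenario with small pages.
openstack flavor set m1.large --property hw:mem_page_size=large
```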