[placement][nova][ptg] NUMA Topology with placement
From the cross-project etherpad
* Spec: https://review.openstack.org/#/c/552924/
This is probably the biggest topic, in the sense that modeling NUMA
in placement and how we do that has a big impact across a large
number of other pending features, including several specs that state
things like "this would be different if we had NUMA in placement".
Similarly, if we do have NUMA in placement, we also end up with
questions about, and requirements for:
* JSON payload for getting allocation candidates
* increased complexity in protecting driver provided traits
* resource provider - request group mapping
* resource providers with traits but no resources
* resource provider (subtree) affinity
And this probably cascades over to dedicated CPUs, CPU
capabilities, network bandwidth management, and so on.
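As a concrete sketch of the write side (provider names and inventory
figures here are illustrative, not taken from the thread), a
two-NUMA-node compute host could be expressed as a provider tree in
which each NUMA node is a child resource provider carrying its own
VCPU and MEMORY_MB inventory, while host-wide resources stay on the
root:

```python
# Hypothetical provider tree for a two-NUMA-node compute host.
# Each NUMA node is a child resource provider of the root compute
# node provider and carries its own VCPU/MEMORY_MB inventory.
compute_node = {
    "name": "compute0",
    "inventories": {"DISK_GB": {"total": 2000}},  # shared host resource
    "children": [
        {
            "name": "compute0_numa0",
            "inventories": {
                "VCPU": {"total": 16},
                "MEMORY_MB": {"total": 65536},
            },
        },
        {
            "name": "compute0_numa1",
            "inventories": {
                "VCPU": {"total": 16},
                "MEMORY_MB": {"total": 65536},
            },
        },
    ],
}


def total(tree, rc):
    """Sum a resource class's inventory across the whole provider tree."""
    own = tree.get("inventories", {}).get(rc, {}).get("total", 0)
    return own + sum(total(c, rc) for c in tree.get("children", []))
```

Nothing in current nested-provider handling obviously prevents
writing such a tree; the open questions are on the read side, below.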
>From the placement perspective, the problem isn't representing the
NUMA info in placement, it's getting candidates back out in a useful
fashion once they are in there, so the resources can be claimed. It
would be useful if someone could make explicit and enumerate:
* What (if any) ways in which the current handling of nested
providers does not support _writing_ NUMA-related info to placement.
* What (we know there are some) ways the current handling of allocation
candidates and the underlying database queries do not support
effective use of NUMA info once it is in placement.
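For the read side, the existing tool is the numbered request-group
syntax on GET /allocation_candidates (resourcesN/requiredN plus
group_policy, added in a placement microversion), which asks that
each group be satisfied by a distinct provider. A sketch of such a
query, with illustrative resource amounts, one group per guest NUMA
cell:

```python
from urllib.parse import urlencode

# One numbered request group per guest NUMA cell; group_policy=isolate
# asks placement to satisfy each group from a different provider,
# which is how per-NUMA-node candidates would be requested.
params = {
    "resources1": "VCPU:8,MEMORY_MB:32768",
    "resources2": "VCPU:8,MEMORY_MB:32768",
    "group_policy": "isolate",
}
query = "GET /allocation_candidates?" + urlencode(params)
```

Whether this, plus the underlying SQL, is expressive enough for
subtree affinity and the other NUMA cases is exactly the gap to
enumerate.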
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent