
[placement][ptg] Allocation Partitioning

On 04/08/2019 10:25 AM, Chris Dent wrote:
>  From the etherpad [1]:
> * do we need this?
> * what is it?
> * who is going to drive it?
> As I recall, allocation partitioning (distinct from
> resource provider partitioning) is a way of declaring that a set of
> allocations belongs to a specific service. This is useful as a way,
> for example, of counting instances spawned via nova. Right now it is
> possible to count vcpus, ram and disk, but if something besides nova
> is making allocations using those resource classes, how do we
> distinguish?

This is more "consumer type" than anything else. In order to satisfy 
usage requests for certain classes of quota (really, just "num 
instances"), we need to know which consumers in the consumers table are 
"Nova instances" and which are not.

Obviously, all consumers are currently Nova instances in placement since 
(AFAIK) no other services have begun using placement to store allocations.
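To make that distinction possible, one minimal approach (a sketch only, not an agreed design; the column name and the 'INSTANCE' value are assumptions) would be a type column on the consumers table:

```sql
-- Sketch only: column name and type values are assumptions, not a settled schema.
-- Tag each consumer with the kind of thing it represents.
ALTER TABLE consumers ADD COLUMN consumer_type VARCHAR(64) NULL;

-- A "num instances" quota query could then filter on the type:
SELECT COUNT(DISTINCT c.uuid)
FROM consumers AS c
WHERE c.project_id = $PROJECT_ID
AND c.consumer_type = 'INSTANCE';
```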

And because placement doesn't have concepts like "local deleted" or 
"soft deleted" allocations, we can accurately count the number of 
instances with a single simple query:

  SELECT COUNT(DISTINCT a.consumer_id)
  FROM allocations AS a
  JOIN consumers AS c
  ON a.consumer_id = c.id
  WHERE c.project_id = $PROJECT_ID;

> Of course it's important to also ask "should we distinguish?". If
> there's a concept of unified limits, does it matter whether a VCPU is
> consumed by _this_ nova or something else if they are consumed by the
> same user?

So this is referring to resource provider partitioning (source 
partitioning), not consumer type. For the problem of detecting whether 
an instance is in *this* Nova or another Nova deployment that uses the 
same placement service, we need a source partition identifier in the 
resource_providers table.

Also keep in mind that Keystone's unified limits are still divisible by 
*region*, which would serve as a natural source partition identifier, I 
think.

> This functionality is closely tied to resource provider
> partitioning. In a complex placement scenario, where placement is
> managing multiple instance spawning tools, in multiple cloud-like
> things, it seems like both would be needed.

Yes, this. In order to get the number of Nova instances in a specific 
deployment of Nova (or region), the above query would instead look like 
this:

  SELECT COUNT(DISTINCT a.consumer_id)
  FROM allocations AS a
  JOIN consumers AS c
  ON a.consumer_id = c.id
  JOIN resource_providers AS rp
  ON a.resource_provider_id = rp.id
  WHERE c.project_id = $PROJECT_ID
  AND rp.source_partition_id = $NOVA_REGION_ID;
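That query assumes a source_partition_id column on resource_providers 
that doesn't exist today. The schema change might look something like 
this (a sketch; the column name, size, and default are placeholders, 
not a settled design):

```sql
-- Sketch only: column name and default value are placeholders.
ALTER TABLE resource_providers
  ADD COLUMN source_partition_id VARCHAR(255) NOT NULL DEFAULT 'default';

-- Partition-scoped queries would want an index on the new column.
CREATE INDEX ix_rp_source_partition
  ON resource_providers (source_partition_id);
```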

> The ongoing work to implement quota counting in placement [2] has a
> workaround for instances not being counted in placement, but the
> "more than one nova per placement" limitation has to be documented.
> How urgent is this? Is there anyone available to do the work this
> cycle? How damaging is it to punt to U? What details are missing in
> the above description?

Not sure of the answers to these questions. As with most features like 
this, 95% of the work seems to end up being in the upgrade path and not 
adding the feature itself. For both the consumer type and source 
partition identification, the upgrade path would entail setting default 
consumer type for all existing consumers and source partition 
identifiers for all existing resource providers.
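In other words, the data migration might be as simple as something like 
this (again a sketch; 'INSTANCE' and 'default' are placeholder values, 
assuming the hypothetical columns above exist):

```sql
-- Sketch of the backfill step for existing rows.
UPDATE consumers
SET consumer_type = 'INSTANCE'
WHERE consumer_type IS NULL;

UPDATE resource_providers
SET source_partition_id = 'default'
WHERE source_partition_id IS NULL;
```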


> [1] https://etherpad.openstack.org/p/placement-ptg-train
> [2] 
> https://review.openstack.org/#/q/topic:bp/count-quota-usage-from-placement