
[placement][nova][ptg] resource provider affinity


We dropped the mailing list address; sorry, I started that reply off-list. Adding it back.

Alex Xu <soulxu at gmail.com> wrote on Tue, Apr 16, 2019 at 7:16 PM:

>
>
> Sean Mooney <smooney at redhat.com> wrote on Tue, Apr 16, 2019 at 2:27 PM:
>
>> On Mon, 2019-04-15 at 23:16 +0800, Alex Xu wrote:
>> >
>> > >
>> > > ?resources=DISK_GB&
>> > > resources1=VCPU:2,MEMORY_MB:128&
>> > > resources1.1=VF:1&
>> > > resources2=VCPU:2,MEMORY_MB:128&
>> > > resources2.1=VF:1&
>> > > group_policy=isolate
>> > >
>> > > Is this the case you're talking about? Sorry, I probably didn't get
>> > > what you meant about changing grouping and group policies. Is there
>> > > any conflicting case in your vision?
>> > >
>> >
>> > Sorry, I misread your case. It should be
>> >
>> > ?resources=DISK_GB:10,VF:1&
>> > resources1=VCPU:2,MEMORY_MB:128&
>> > resources2=VCPU:2,MEMORY_MB:128&
>> > group_policy=isolate
>> >
>> > The VF may come from any RP in the whole tree.
>> No, that won't work, because it would require the DISK_GB and the VF to
>> come from the same resource provider. So you would have to do:
>>
>
> No, that isn't the meaning of un-numbered resources. DISK_GB and VF are in
> the un-numbered request group; they may come from any RPs in the whole
> tree, and needn't be on the same resource provider.
>
>
> http://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/granular-resource-requests.html#semantics
> "The semantic for the (single) un-numbered grouping is unchanged. That is,
> it may still return results from different RPs in the same tree (or, when
> "shared" is fully implemented, the same aggregate)."
>
>
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> group_policy=isolate
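For illustration, a query string like the one above can be assembled with only the Python standard library; this is just a sketch of the request, not Nova or Placement code:

```python
# Sketch only: build the granular allocation-candidates query discussed
# above using urllib from the standard library (this is not Nova code).
from urllib.parse import urlencode

params = [
    ("resources", "DISK_GB:10"),            # un-numbered group: any RP in the tree
    ("resources1", "VCPU:2,MEMORY_MB:128"),
    ("resources2", "VCPU:2,MEMORY_MB:128"),
    ("resources3", "VF:1"),
    ("group_policy", "isolate"),
]
# safe=":," keeps colons and commas readable instead of percent-encoding them
query = "?" + urlencode(params, safe=":,")
print(query)
```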
>>
>> The issue arises if I want 2 VFs.
>>
>> Do you do
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=isolate
>>
>> or
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:2&
>> group_policy=isolate
>>
>> or
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=none
>>
>> I would say the last one is the most correct from the Neutron point of
>> view; however, we lose the guarantee that the CPU and RAM come from
>> different NUMA nodes. The first option forces the VFs to be from
>> different RPs, and the second requires them to be from the same RP.
>>
>> What you really want is:
>> ?resources=DISK_GB:10&
>> resources1=VCPU:2,MEMORY_MB:128&
>> resources2=VCPU:2,MEMORY_MB:128&
>> resources3=VF:1&
>> resources4=VF:1&
>> group_policy=isolate;none:3,4
>>
>> That is, the VFs can come from any RP in the tree, but resource groups 1
>> and 2 need to be isolated. Or, said another way: by default each resource
>> group is isolated, but resource groups 3 and 4 have policy none.
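A rough sketch of how such an extended group_policy value might be interpreted. This syntax is only a proposal in this thread, not something Placement implements; the parser name and behaviour below are my assumptions:

```python
# Hypothetical sketch: parse the *proposed* extended group_policy syntax
# "isolate;none:3,4" into a default policy plus per-group overrides.
# This syntax is not implemented in Placement; names here are invented.
def parse_group_policy(value):
    default, _, rest = value.partition(";")
    overrides = {}
    for clause in rest.split(";") if rest else []:
        policy, _, groups = clause.partition(":")
        for suffix in groups.split(","):
            overrides[suffix] = policy       # e.g. {"3": "none"}
    return default, overrides

print(parse_group_policy("isolate;none:3,4"))
# → ('isolate', {'3': 'none', '4': 'none'})
```

A plain value such as "isolate" still parses unchanged (no overrides), so the sketch stays backward compatible with the existing parameter.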
>>