[all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability
---- On Tue, 17 Sep 2019 01:41:23 -0700 <zhu.fanglei at zte.com.cn> wrote ----
> Seems we can hardly reach an agreement about whether to use a microversion for fields added in a response, but I think for Tempest things are simpler: we can add the schema check according to the api-ref, and if some issues are found (like the 'groups' field) in an older version, we can simply remove that field from the required fields. That won't happen very often.
I do not think we should do that. When I proposed the idea of doing strict JSON schema validation for the volume API, the main goal was to strictly block any backward-incompatible or non-interoperable changes in the API. The Compute strict JSON schema validation is a good example of the benefits of doing that.
The idea behind strict validation is that we have a predefined schema for the API response, covering each field's name, type, range, and optional/mandatory status, and we compare it against the actual API response to see whether any field has been added or removed, or has had its type or range changed.
If we do not enforce the 'required' fields or additionalProperties=False, then the validation is not meaningfully strict; I would rather leave those APIs out of strict validation entirely. I have commented the same on all your open patches as well. JSON schema validation has to be either completely strict or not done at all.
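To make the "completely strict or not at all" point concrete, here is a minimal sketch of what strict response validation checks. The schema and field names are illustrative, not Tempest's actual volume schemas, and the check is written in plain Python rather than the `jsonschema` library Tempest uses; it captures the same two knobs discussed above: `required` (every documented field must be present) and `additionalProperties=False` (no undocumented field may appear).

```python
# Illustrative strict-validation sketch (not Tempest's real schema code).
# 'required' catches removed fields; rejecting unknown keys plays the role
# of additionalProperties=False and catches fields added without a
# microversion bump.

VOLUME_SCHEMA = {
    "required": ["id", "size", "status"],
    "properties": {"id": str, "size": int, "status": str},
}

def validate_strict(response, schema):
    errors = []
    # A missing required field breaks clients that rely on the api-ref.
    for field in schema["required"]:
        if field not in response:
            errors.append("missing required field: %s" % field)
    for field, value in response.items():
        if field not in schema["properties"]:
            # An extra, undocumented field is exactly the kind of
            # non-discoverable change strict validation should block.
            errors.append("unexpected field: %s" % field)
        elif not isinstance(value, schema["properties"][field]):
            errors.append("wrong type for field: %s" % field)
    return errors

# A response carrying an extra 'groups' field fails strict validation:
print(validate_strict({"id": "v1", "size": 10, "status": "available",
                       "groups": []}, VOLUME_SCHEMA))
# -> ['unexpected field: groups']
```

Dropping the `required` loop or the unknown-field rejection silently admits exactly the interoperability breaks this thread is about, which is why a half-strict schema adds no value.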
As for the api-ref you mentioned: I do not think we have a well-defined api-ref for the volume case. You can see that during the Tempest schema implementation you fixed 18 bugs in the api-ref. I always go with what the code returns, since the code is what actually produces the response the end user gets.
> Original Mail
> Sender: Ghanshyam Mann <gmann at ghanshyammann.com>
> To: Sean Mooney <smooney at redhat.com>
> CC: Sean McGinnis <sean.mcginnis at gmx.com>; Matt Riedemann <mriedemos at gmail.com>; openstack-discuss <openstack-discuss at lists.openstack.org>
> Date: 2019/09/17 08:08
> Subject: Re: [all][interop][cinder][qa] API changes with/without microversion and Tempest verification of API interoperability
---- On Tue, 17 Sep 2019 07:59:19 +0900 Sean Mooney <smooney at redhat.com> wrote ----
> > On Mon, 2019-09-16 at 17:11 -0500, Sean McGinnis wrote:
> > > >
> > > > Backend/type specific information leaking out of the API dynamically like
> > > > that is definitely an interoperability problem and as you said it sounds
> > > > like it's been that way for a long time. The compute servers diagnostics API
> > > > had a similar problem for a long time and the associated Tempest test for
> > > > that API was disabled for a long time because the response body was
> > > > hypervisor specific, so we eventually standardized it in a microversion so
> > > > it was driver agnostic.
> > > >
> > >
> > > Except this isn't backend specific information that is leaking. It's just
> > > reflecting the configuration of the system.
> > yes, and config-driven api behavior is also an interop problem.
> > ideally you should not be able to tell from the api response at all whether cinder is backed by ceph or emc.
> > sure, you might have a volume type called ceph and another called emc, but both should
> > report capacity in the same field with the same unit.
> > ideally you would have a snapshots or gigabytes quota and optionally associate that with a volume type,
> > but snapshots_ceph is not interoperable across clouds if it exists with that name solely because ceph was used on the
> > backend. as a client i would have to look at snapshots_* to figure out my quota, and in principle that is an unbounded
> > set.
> Yeah, and this is a real pain point for end users or apps using the API directly. Dynamic API behaviour based on system configuration is an interoperability issue.
> In the bug#1687538 case, the new field is going to appear for the same backend and the same configuration of the cloud. A cloud provider upgrades their cloud from ocata to any later release, and users will start getting the new field without any mechanism to discover whether that field is expected to be present or not.
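The discovery mechanism being asked for here is what microversions provide: each response schema is bound to the version that introduced it, so a client (or Tempest) knows exactly which fields to expect at a given version. A hypothetical sketch of that selection logic follows; the version number 3.13 and the 'groups' field binding are assumptions for illustration, not Cinder's actual microversion history.

```python
# Hypothetical sketch: pick the response schema by negotiated microversion,
# so a field added in some version 3.13 is required only at or above it.

BASE_REQUIRED = ["id", "size", "status"]

# Assumption for illustration: 'groups' documented as present from 3.13 on.
SCHEMAS = [
    ((3, 13), {"required": BASE_REQUIRED + ["groups"]}),
    ((3, 0),  {"required": BASE_REQUIRED}),
]

def schema_for(microversion):
    """Return the newest schema whose minimum version is satisfied."""
    major, minor = (int(p) for p in microversion.split("."))
    for min_version, schema in SCHEMAS:  # ordered newest first
        if (major, minor) >= min_version:
            return schema
    raise ValueError("unsupported microversion: %s" % microversion)

print(schema_for("3.0")["required"])   # 'groups' not expected here
print(schema_for("3.50")["required"])  # 'groups' is required here
```

With this in place a field's presence is discoverable from the version the client requested, instead of depending invisibly on which release the operator happens to run.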
> > >