Re: QA signup


I don't think anyone has mentioned this yet, but we should probably
consider releasing 4.0 alpha jars to Maven Central soon so the open
source ecosystem can start testing against a consistent Cassandra 4.0.
For example, I had to hack 4.0 into Priam's build [1] by manually
building a jar and checking it in, which is ... neither good practice
nor reproducible for others. I'm not sure how hard it would be, but
supporting periodic SNAPSHOT releases would also be great, since it
would at least allow building against trunk. It might also be a good
idea to have a document (a Confluence page?) listing the breaking
changes most likely to require action from users. For example, the
SeedProvider interface change will probably break almost everyone's
deployment (though it is easy to fix), and a central list of removed
yaml options might be more helpful than the NEWS file alone.

Regarding testing areas, we deployed trunk in the Netflix testing
environment on Wednesday with the aim of testing the netty internode
messaging subsystem on 200+ node clusters. We are working with Jason,
Dinesh, and Jordan, have already found some interesting results, and
would like to write them down, as well as work on establishing good
baselines and a testing methodology for stressing that subsystem. Is
the consensus here to create Jira epics tagged as 4.0 blockers for
each subsystem, or Confluence pages (if Confluence, I think we need
to give people permission to add pages)?

Another area we can help test, and are looking for collaborators on,
is audit/full query logging. We are also potentially interested in
helping to test repair, but our internal implementation doesn't
support Cassandra 4.x ... I think we can re-work the CASSANDRA-14346
patch without too much effort to thoroughly test full/incremental
repair on clusters of any scale (or maybe the Reaper folks can test
repair).

[1] https://github.com/Netflix/Priam/pull/713/files#diff-3c33bef9f0334cf724470d50eae8dd8b

-Joey

On Fri, Sep 7, 2018 at 9:57 AM Jonathan Haddad <jon@xxxxxxxxxxxxx> wrote:
>
> Really good idea JD. Keeping all the tests under an umbrella ticket for the
> feature with everything linked back makes a lot of sense.
>
> On Thu, Sep 6, 2018 at 11:09 PM J. D. Jordan <jeremiah.jordan@xxxxxxxxx>
> wrote:
>
> > I would suggest that JIRAs tagged as 4.0 blockers be created for the list
> > once it is fleshed out.  Test plans and results could be posted to said
> > JIRAs, to be closed once a given test passes. Any bugs found can also then
> > be related back to such a ticket for tracking them as well.
> >
> > -Jeremiah
> >
> > > On Sep 6, 2018, at 12:27 PM, Jonathan Haddad <jon@xxxxxxxxxxxxx> wrote:
> > >
> > > I completely agree with you, Sankalp.  I didn't want to dig too deep into
> > > the underlying testing methodology (and I still think we shouldn't just
> > > yet) but if the goal is to have confidence in the release, our QA process
> > > needs to be comprehensive.
> > >
> > > I believe that having focused teams for each component with a team leader
> > > with support from committers & contributors gives us the best shot at
> > > defining large scale functional tests that can be used to form both
> > > progress and bug reports.  (A person could / hopefully will be on more
> > than
> > > one team).  Coming up with those comprehensive tests will be the jobs of
> > > the teams, getting frequent bidirectional feedback on the dev ML.  Bugs
> > go
> > > in JIRA as per usual.
> > >
> > > Hopefully we can continue this process after the release, giving the
> > > project more structure, and folding more people in over time as
> > > contributors and ideally committers / PMC.
> > >
> > > Jon
> > >
> > >
> > >> On Thu, Sep 6, 2018 at 1:15 PM sankalp kohli <kohlisankalp@xxxxxxxxx>
> > wrote:
> > >>
> > >> Thanks for starting this Jon.
> > >> Instead of saying "I tested streaming", we should define what all was
> > >> tested like was all data transferred, what happened when stream failed,
> > >> etc.
> > >> Based on talking to a few users, looks like most testing is done by
> > doing
> > >> an operation or running a load and seeing if it "worked" and no errors
> > in
> > >> logs.
> > >>
> > >> Another important thing will be to fix bugs ASAP ahead of testing, as
> > >> fixes can lead to more bugs :)
> > >>
> > >>>> On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad <jon@xxxxxxxxxxxxx>
> > wrote:
> > >>>
> > >>> I was thinking along the same lines.  For this to be successful I think
> > >>> either weekly or bi-weekly summary reports back to the mailing list by
> > >> the
> > >>> team lead for each subsection on what's been tested and how it's been
> > >>> tested will help keep things moving along.
> > >>>
> > >>> In my opinion the lead for each team should *not* be the contributor
> > that
> > >>> wrote the feature, but someone who's very interested in it and can use
> > >> the
> > >>> contributor as a resource.  I think it would be difficult for the
> > >>> contributor to poke holes in their own work - if they could do that it
> > >>> would have been done already.  This should be a verification process
> > >> that's
> > >>> independent as possible from the original work.
> > >>>
> > >>> In addition to the QA process, it would be great if we could get a docs
> > >>> team together.  We've got quite a bit of undocumented features and
> > nuance
> > >>> still, I think hammering that out would be a good idea.  Mick brought
> > up
> > >>> updating the website docs in the thread on testing different JDKs [1],
> > >> if
> > >>> we could figure that out in the process we'd be in a really great
> > >> position
> > >>> from the user perspective.
> > >>>
> > >>> Jon
> > >>>
> > >>> [1]
> > >>
> > https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E
> > >>>
> > >>>>> On Thu, Sep 6, 2018 at 10:35 AM Jordan West <jordanrw@xxxxxxxxx>
> > wrote:
> > >>>>
> > >>>> Thanks for starting this thread Jon!
> > >>>>
> > >>>>> On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad <jon@xxxxxxxxxxxxx>
> > >>>> wrote:
> > >>>>
> > >>>>> For 4.0, I'm thinking it would be a good idea to put together a list
> > >> of
> > >>>> the
> > >>>>> things that need testing and see if people are willing to help test /
> > >>>> break
> > >>>>> those things.  My goal here is to get as much coverage as possible,
> > >> and
> > >>>> let
> > >>>>> folks focus on really hammering on specific things rather than just
> > >>>> firing
> > >>>>> up a cluster and rubber stamping it.  If we're going to be able to
> > >>>>> confidently deploy 4.0 quickly after its release we're going to
> > >> need a
> > >>>>> high attention to detail.
> > >>>> +1 to a more coordinated effort. I think we could use the Confluence
> > >> that
> > >> was set up a little while ago for exactly this purpose, at
> > >> least
> > >>>> for finalized plans and results:
> > >>>> https://cwiki.apache.org/confluence/display/CASSANDRA.
> > >>>>
> > >>>>
> > >>>>> In addition to a signup sheet, I think providing some guidance on how
> > >>> to
> > >>>> QA
> > >>>>> each thing that's being tested would go a long way.  Throwing "hey
> > >>> please
> > >>>>> test sstable streaming" over the wall will only get quality feedback
> > >>> from
> > >>>>> folks that are already heavily involved in the development process.
> > >> It
> > >>>>> would be nice to bring some new faces into the project by providing a
> > >>>>> little guidance.
> > >>>>
> > >>>>> We could help facilitate this even further by considering the people
> > >>>>> signing up to test a particular feature as a team, with seasoned
> > >>>> Cassandra
> > >>>>> veterans acting as team leads.
> > >>>>
> > >>>> +1 to this as well. I am always a fan of folks learning about a
> > >>>> subsystem/project through testing. It can be challenging to get folks
> > >> new
> > >>>> to a project excited about testing first but for those that do, or for
> > >>>> committers who want to learn another part of the db, it's a great way
> > to
> > >>>> learn.
> > >>>>
> > >>>> Another thing we can do here is make sure teams are writing about the
> > >>>> testing they are doing and their results. This will help share
> > >> knowledge
> > >>>> about techniques and approaches that others can then apply. This
> > >>> knowledge
> > >>>> can be shared on the mailing list, a blog post, or in JIRA.
> > >>>>
> > >>>> Jordan
> > >>>>
> > >>>>
> > >>>>> Any thoughts?  I'm happy to take the lead on this.
> > >>>>> --
> > >>>>> Jon Haddad
> > >>>>> http://www.rustyrazorblade.com
> > >>>>> twitter: rustyrazorblade
> > >>>
> > >>>
> > >>> --
> > >>> Jon Haddad
> > >>> http://www.rustyrazorblade.com
> > >>> twitter: rustyrazorblade
> > >
> > >
> > > --
> > > Jon Haddad
> > > http://www.rustyrazorblade.com
> > > twitter: rustyrazorblade
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: dev-unsubscribe@xxxxxxxxxxxxxxxxxxxx
> > For additional commands, e-mail: dev-help@xxxxxxxxxxxxxxxxxxxx
> >
> > --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade