Re: QA signup
Thanks for starting this Jon.
Instead of saying "I tested streaming", we should define exactly what was
tested: e.g. was all the data transferred, and what happened when a stream failed.
Based on talking to a few users, it looks like most testing is done by performing
an operation or running a load and seeing if it "worked" with no errors in the logs.
Another important thing will be to fix bugs ASAP ahead of testing, since
fixes can themselves introduce more bugs :)
On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad <jon@xxxxxxxxxxxxx> wrote:
> I was thinking along the same lines. For this to be successful I think
> either weekly or bi-weekly summary reports back to the mailing list by the
> team lead for each subsection on what's been tested and how it's been
> tested will help keep things moving along.
> In my opinion the lead for each team should *not* be the contributor that
> wrote the feature, but someone who's very interested in it and can use the
> contributor as a resource. I think it would be difficult for the
> contributor to poke holes in their own work - if they could do that it
> would have been done already. This should be a verification process that's
> as independent as possible from the original work.
> In addition to the QA process, it would be great if we could get a docs
> team together. We've still got quite a few undocumented features and nuances;
> I think hammering that out would be a good idea. Mick brought up
> updating the website docs in the thread on testing different JDKs. If
> we could figure that out in the process, we'd be in a really great position
> from the user perspective.
> On Thu, Sep 6, 2018 at 10:35 AM Jordan West <jordanrw@xxxxxxxxx> wrote:
> > Thanks for starting this thread Jon!
> > On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad <jon@xxxxxxxxxxxxx>
> > wrote:
> > > For 4.0, I'm thinking it would be a good idea to put together a list of
> > > the things that need testing and see if people are willing to help test /
> > > break those things. My goal here is to get as much coverage as possible,
> > > and let folks focus on really hammering on specific things rather than
> > > just firing up a cluster and rubber stamping it. If we're going to be
> > > able to confidently deploy 4.0 quickly after its release we're going to
> > > need a high attention to detail.
> > >
> > >
> > +1 to a more coordinated effort. I think we could use the Confluence space
> > that was set up a little while ago, since it was created for this purpose,
> > at least for finalized plans and results:
> > https://cwiki.apache.org/confluence/display/CASSANDRA.
> > > In addition to a signup sheet, I think providing some guidance on how
> > > to QA each thing that's being tested would go a long way. Throwing "hey
> > > test sstable streaming" over the wall will only get quality feedback from
> > > folks that are already heavily involved in the development process. It
> > > would be nice to bring some new faces into the project by providing a
> > > little guidance.
> > >
> > > We could help facilitate this even further by considering the people
> > > signing up to test a particular feature as a team, with seasoned
> > > Cassandra veterans acting as team leads.
> > >
> > +1 to this as well. I am always a fan of folks learning about a
> > subsystem/project through testing. It can be challenging to get folks new
> > to a project excited about testing at first, but for those that do, or for
> > committers who want to learn another part of the db, it's a great way to
> > learn.
> > Another thing we can do here is make sure teams are writing about the
> > testing they are doing and their results. This will help share knowledge
> > about techniques and approaches that others can then apply. This
> > can be shared on the mailing list, in a blog post, or in JIRA.
> > Jordan
> > > Any thoughts? I'm happy to take the lead on this.
> > > --
> > > Jon Haddad
> > > http://www.rustyrazorblade.com
> > > twitter: rustyrazorblade
> > >
> Jon Haddad
> twitter: rustyrazorblade