> > You only have to look at the past few attempts (scrapped versions)
> > to release apache to see the dangers in rush rush rush attitude.
> I’m assuming it’s a given that httpd should only release when ready
> to. I don’t think any of the release-often advocates are suggesting
> taking less care with releases or releasing untested code. My
> question is would any more testing go on if we did release less
> frequently? On one side I get the argument that more time between
> releases theoretically allows for more testing time, but I question
> whether anyone would use that time for that? My experience of
> software development (outside of httpd) suggests not.
To provide a $0.02 answer to that question: last night I went around and
surveyed the many tools that can be used for continuous integration &
continuous delivery, for testing (both performance and debugging), and
for other assorted tasks that help put out the best software releases
possible. The list is included below and covers both userspace tools and
Linux kernel tools (selftests, memory-leak detection, KASAN & UBSAN...).
The thing is, some software is famous for API and / or ABI breakage
despite its release cycle (short, like two weeks to a month, or long,
like yearly releases). Which software? That is irrelevant.
The reason I am working on this at the moment is purely for my own
big@ss selfish benefit: I was / am working for a long time as a computer
tech, sysadmin, software developer and tester, and I became irritated
often enough by quality issues in build-it-yourself distros (Gentoo,
Funtoo, LFS+BLFS... among them, LFS+BLFS is the best quality one so far)
that I want to put up a home server which will do nothing but run tests
using that toolset and release distribution images (kernel + userspace
selected for various tasks, workstation or server).
A side benefit is that, as soon as breakage happens (say, after a
particular commit or something), I get to know immediately and can
report it; optionally, I can provide a patch (which is my intended goal, short to
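The per-commit breakage detection described above could be sketched roughly like this — a minimal illustration only, not the actual system; the commit list and the `run_tests` callback are hypothetical stand-ins for a real checkout-build-test step driven by git and the toolset:

```python
# Sketch of per-commit breakage detection: walk the commit history in
# order, run the test suite against each commit, and report the first
# one that breaks. In a real setup `run_tests` would check out the
# commit, build, and run the selected test tools.

def first_breaking_commit(commits, run_tests):
    """Return the id of the first commit whose tests fail, or None."""
    for commit in commits:
        if not run_tests(commit):
            return commit  # breakage detected: report (and maybe patch) here
    return None

if __name__ == "__main__":
    history = ["a1f3", "b2c4", "c3d5", "d4e6"]
    # Pretend the suite starts failing at commit "c3d5".
    broken = first_breaking_commit(history, lambda c: c not in ("c3d5", "d4e6"))
    print(broken)  # -> c3d5
```

A linear scan like this reports the exact commit that introduced the breakage; over long histories the same idea is usually done logarithmically with git-bisect.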
> Saying that, the releases do take effort and do take time to test, so
> I don’t think httpd is ever going to be a fully automated “DevOps”
> process that is comfortable releasing multiple times per day. As I
> mentioned previous nginx seems to release once a month and that seems
> about right to me but others may have a different opinion. Perhaps it
> would be good to clarify that to make sure we are all on the same
> understanding as to what to goal here is?
Look at it this way. It is popular in software companies to use a scrum
methodology and cut a release (internal or external) every two weeks, or
on a monthly basis, with frequent new features; but it can and does
happen that a particular scrum cycle is used for refactoring / bug-fixing
purposes with no new features added (I can't provide any more
details than that).
In many of those companies (those that have existed for a long time), it
took some massive effort to migrate to a scrum-based methodology from a
waterfall methodology. It is my impression (and, as always, I could be
wrong) that a similar pain is happening here.
> Nonetheless scrapping a release is seen as a bad thing, as your
> comment suggests so perhaps that needs to be addressed one way or
I won't comment on scrapping a release (number), but I do hope that the
system I'm working on will prove useful when it is done and I can make
My constraint is that I can't provide any deadline for it, given that my
medical care right now is tying up between 15 and 25 hours of work per
week (medical consults, homework, helping with the number of
differential diagnoses, and some medical literature review to offload my
medical team) for the next few years.
> In summary I can understand why you got the stats you did, but I
> still believe there are compelling reasons to release more frequently
> (within reason).
Ultimately, I think frequent releases will prove useful, but there is
some pain that needs to be overcome first.