Re: Why are large code drops damaging to a community?
I'd say they are a symptom *and* a problem. But putting that aside, can you
unpack what you mean, please?
What was that code drop from SGI a symptom of?
What did Robert Thau do (or not do), before, during, or after, to ensure the
success of httpd?
On Sat 20. Oct 2018 at 00:28 Jim Jagielski <jim@xxxxxxxxxxx> wrote:
> I would say that, in general, large code drops are more a *symptom* of a
> problem, rather than a problem, in and of itself...
> > On Oct 19, 2018, at 5:12 PM, Alex Harui <aharui@xxxxxxxxx.INVALID> wrote:
> > IMO, the issue isn't about large code drops. Some will be ok.
> > The issue is about significant collaboration off-list about anything,
> not just code.
> > My 2 cents,
> > -Alex
> > On 10/19/18, 1:32 PM, "James Dailey" <jamespdailey@xxxxxxxxx> wrote:
> > +1 on this civil discourse.
> > I would like to offer that sometimes large code drops are unavoidable and
> > necessary. Jim's explanation of the httpd contribution of type 1 is a good
> > example.
> > I think we would find that many projects started with a large code drop
> > (maybe more than one) - a sufficient amount of code - to get a project
> > started. When projects are young it would be normal and expected for this
> > to happen. It quickly gets a community to a "thing" that can be added to.
> > It obviously depends on the kinds of components, tools, frameworks, etc.,
> > that are being developed. Game theory is quite apropos - you need a
> > sufficient incentive for *timely* collaboration, for hanging together.
> > Further, if your "thing" is going to be used directly in market (i.e. with
> > very little of a product wrapper), then there is a strong incentive
> > to share back the latest and greatest. The further from market the code is,
> > the easier it is to contribute. Both the Collaboration space and the
> > Competitive space are clearly delineated, whereas in a close-to-market
> > immediacy situation you have too much overlap and therefore a built-in
> > delay of code contribution to preserve market competitiveness.
> > So, combining the "sufficient code to attract contribution" metric with the
> > market-immediacy metric, you can predict engagement by outside organizations
> > (or their contributors) in a project. In such a situation, it is better, in
> > my view, to accept any and all branched code even if it is dev'd off-list.
> > This allows for inspection/code examination and further exploration - at a
> > minimum. Accepting on a branch is neither the same as accepting for
> > release, nor merging to the master branch.
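The "accept on a branch, don't merge" idea above can be sketched with plain
git. This is a minimal illustration, not anything from the thread itself: the
repository paths, commit messages, and the `drops/vendor-review` branch name
are all illustrative stand-ins.

```shell
#!/bin/sh
# Sketch: a large code drop is fetched onto a dedicated review branch,
# where it can be inspected and built, while the main branch stays
# untouched. All names and paths here are hypothetical.
set -e
work=$(mktemp -d)

# Stand-in for the community's repository, with one baseline commit.
git init -q "$work/project"
git -C "$work/project" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "community baseline"

# Stand-in for the contributor's repository holding the large drop.
git init -q "$work/drop"
git -C "$work/drop" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "large vendor code drop"

# Fetch the drop onto a review branch; nothing is merged or released.
git -C "$work/project" fetch -q "$work/drop" HEAD:drops/vendor-review

git -C "$work/project" branch --list 'drops/*'  # the drop, ready to examine
git -C "$work/project" log --oneline            # baseline history unchanged
```

Merging (or rejecting) the drop then becomes a separate, later decision made
after the community has had a chance to examine the branch.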
> > Now, the assumption that the code is better than what the community has
> > developed has to be challenged. It could be that the branched code should
> > be judged only on the merits of the code (is it better, and does it do
> > more?), or it could be judged on the basis that it "breaks the current
> > build". There can be a culture of a project to accept such code drops with
> > the caveat that if the merges cannot be done by the submitting group, then
> > the project will have a resistance to such submissions (you break it, you
> > fix it), or alternatively that there will be a small group of people -
> > sourced from such delayed-contribution types - that work on doing the
> > merges. The key seems to be to create the incentive to share code as
> > others do, to avoid being the one that breaks the build.
> > ~jdailey67
> > On Fri, Oct 19, 2018 at 6:10 AM Jim Jagielski <jim@xxxxxxxxxxx> wrote:
> >> Large code drops are almost always damaging, since inherent in that
> >> process is the concept of "throwing the code over a wall". But sometimes it
> >> does work out, assuming that continuity and "good intentions" are present.
> >> To show this, join me in the Wayback Machine as Sherman and I travel to
> >> the year 1995...
> >> This is right around the start of Apache, back when Apache meant the web
> >> server, and at the time, the project was basically what was left of the
> >> NCSA web server plus some patches and bug fixes... Around this time, one of
> >> the core group, Robert Thau, started independent work on a rewrite
> >> of the server, which he code-named "Shambala". It was basically a single
> >> contributor effort (himself). One day he simply said to the group, "Here, I
> >> have this new design and architecture for Apache. It adds a lot of
> >> features." So much of what defines httpd today can find its origin right
> >> there: modular framework, pools, preforking (and, as such, the initial
> >> glimmerings of MPMs), extendable API, etc...
> >> In many ways, this was a large code drop. What made it different is that
> >> there was *support* by the author and the community to work on merging
> >> it into the whole. It became, basically, a community effort.
> >> Now compare that with a different scenario... Once httpd had picked up
> >> steam, making sure that it was ported to everyone's favorite *nix flavor
> >> became important. SGI had done work on a set of patches that ported
> >> httpd to their OS and provided these patches (a set of 10 very large
> >> patch-files, iirc) to the group. What was clear in those patches is that
> >> there was no consideration at all on how those patches affected or broke
> >> anyone else. They rewrote huge swaths of code, optimizing for SGI and
> >> totally destroying any sort of portability for anyone else. And when we
> >> responded by asking for more information, help with chatting with their
> >> developers to try to figure things out, and basically trying to figure out
> >> how to use and merge this stuff, SGI was basically just silent. They gave
> >> it to us, and that was the beginning and the end of their involvement as
> >> far as they were concerned.
> >> Way, way too many large code drops are the latter. Hardly any are the
> >> former.
> >> 1. I have paraphrased both the Shambala and SGI events.