Re: Why are large code drops damaging to a community?
I would say that, in general, large code drops are more a *symptom* of a problem than a problem in and of themselves...
> On Oct 19, 2018, at 5:12 PM, Alex Harui <aharui@xxxxxxxxx.INVALID> wrote:
> IMO, the issue isn't about large code drops. Some will be ok.
> The issue is about significant collaboration off-list about anything, not just code.
> My 2 cents,
> On 10/19/18, 1:32 PM, "James Dailey" <jamespdailey@xxxxxxxxx> wrote:
> +1 on this civil discourse.
> I would like to offer that sometimes large code drops are unavoidable and
> necessary. Jim's explanation of the httpd contribution of type 1 is a good
> example.
> I think we would find that many projects started with a large code drop
> (maybe more than one) - a sufficient amount of code - to get a project
> started. When projects are young it would be normal and expected for this
> to happen. It quickly gets a community to a "thing" that can be added to.
> It obviously depends on the kinds of components, tools, frameworks, etc.
> that are being developed. Game theory is quite apropos - you need a
> sufficient incentive for *timely* collaboration, for hanging together.
> Further, if your "thing" is going to be used directly in market (i.e. with
> very little of a product wrapper), then there is a strong *disincentive*
> to share back the latest and greatest. The further from market immediacy,
> the easier it is to contribute: the collaboration space and the
> competitive space are clearly delineated. In a close-to-market situation,
> the two overlap too much, and there is a built-in delay of code
> contribution to preserve market competitiveness.
> So, combine the "sufficient code to attract contribution" metric with the
> market-immediacy metric and you can predict engagement by outside vendors
> (or their contributors) in a project. In such a situation, it is better, in
> my view, to accept any and all branched code, even if it is developed
> off-list.
> This allows for inspection/ code examination and further exploration - at a
> minimum. Accepting code on a branch is not the same as accepting it for
> release, nor as merging it to the master branch.
> Now, the assumption that the dropped code is better than what the community
> has developed has to be challenged. The branched code could be judged only
> on the merits of the code (is it better and more complete?), or it could
> be judged on the basis that it "breaks the current build".
> A project can have a culture of accepting such code drops with the caveat
> that if the merges cannot be done by the submitting group, the project
> will resist such submissions (you break it, you fix it). Alternatively,
> there can be a small group of people, sourced from such
> delayed-contribution contributors, that works on doing the merges. The key
> seems to be to create the incentive to share code before others do, to
> avoid being the one that breaks the build.
> On Fri, Oct 19, 2018 at 6:10 AM Jim Jagielski <jim@xxxxxxxxxxx> wrote:
>> Large code drops are almost always damaging, since inherent in that
>> process is the concept of "throwing the code over a wall". But sometimes it
>> does work out, assuming that continuity and "good intentions" are followed.
>> To show this, join me in the Wayback Machine as Sherman and I travel to
>> the year 1995...
>> This is right around the start of Apache, back when Apache meant the web
>> server, and at the time, the project was basically what was left of the
>> NCSA web server plus some patches and bug fixes... Around this time, one of
>> the core group, Robert Thau, started independent work on a re-architecture
>> of the server, which he code-named "Shambala". It was basically a single
>> contributor effort (himself). One day he simply said to the group, "Here, I
>> have this new design and architecture for Apache. It adds a lot of
>> features." So much of what defines httpd today can find its origin right
>> there: modular framework, pools, preforking (and, as such, the initial
>> glimmerings of MPMs), extendable API, etc...
>> In many ways, this was a large code drop. What made it different is that
>> there was *support* by the author and the community to work on integrating
>> it into the whole. It became, basically, a community effort.
>> Now compare that with a different scenario... Once httpd had picked up
>> steam, and ensuring that it was ported to everyone's favorite *nix
>> flavor had become important, SGI did work on a set of patches that ported
>> httpd to their OS and provided these patches (a set of 10 very large
>> patch-files, iirc) to the group. What was clear in those patches is that
>> there was no consideration at all on how those patches affected or broke
>> anyone else. They rewrote huge swaths of code, optimizing for SGI and
>> totally destroying any sort of portability for anyone else. And when we
>> responded by asking for more information, offering to chat with their
>> developers to try to figure things out, and basically trying to work out
>> how to use and merge this stuff, SGI was just silent. They sent
>> it to us and that was the beginning and the end of their involvement as far
>> as they were concerned.
>> Way, way too many large code drops are the latter. Hardly any are the
>> former.
>>
>> 1. I have paraphrased both the Shambala and SGI events