[stackalytics] Reported numbers seem inaccurate
Julia Kreger wrote:
> Is it time for us, as a community, to whip up something that helps
> provide basic insight?
I think we should first agree on what we actually need.
A first level would be to extract the data about changes and reviews
from Gerrit into a queryable system, so that we don't have everyone
hammering Gerrit with individual stats queries. Then people can share
their "insights scripts" and run them on the same official data.
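As a rough sketch of that first level, something like the following could
pull change records out of Gerrit's documented /changes/ REST endpoint
into plain JSON that everyone's scripts consume. The host and query
string here are illustrative assumptions, not a proposal for the actual
deployment:

```python
# Minimal sketch: extract change data from Gerrit's REST API into local
# JSON, so stats scripts share one dataset instead of each hammering
# Gerrit. Host and query below are assumptions for illustration.
import json
import urllib.request

GERRIT_URL = "https://review.opendev.org"  # assumed Gerrit host


def parse_gerrit_json(body):
    """Gerrit prefixes JSON responses with ")]}'" to guard against
    cross-site script inclusion; strip it before decoding."""
    prefix = ")]}'"
    if body.startswith(prefix):
        body = body[len(prefix):]
    return json.loads(body)


def fetch_changes(query, limit=100):
    """Fetch changes matching a Gerrit query, returned as a list of
    ChangeInfo dicts."""
    url = f"{GERRIT_URL}/changes/?q={query}&n={limit}"
    with urllib.request.urlopen(url) as resp:
        return parse_gerrit_json(resp.read().decode("utf-8"))
```

The awkward ")]}'" handling is exactly the kind of detail that is nicer
to solve once, centrally, than in everyone's private scripts.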
A second level would be to come up with some common definition of "basic
insight" and produce that data for all teams. Personally I think the
first level would already give us a lot more confidence and consistency
in the numbers we produce.
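To illustrate what an "insights script" over that shared extract might
look like, here is a hypothetical example counting changes per owner.
The record shape follows Gerrit's ChangeInfo JSON, but the sample data
is made up:

```python
# Hypothetical insights script running over the shared extract rather
# than Gerrit itself: count changes per owner. Sample data is invented;
# the field names mirror Gerrit's ChangeInfo JSON.
from collections import Counter


def changes_per_owner(changes):
    """Return a Counter mapping owner name to number of changes."""
    return Counter(c["owner"]["name"] for c in changes)


sample = [
    {"owner": {"name": "alice"}},
    {"owner": {"name": "bob"}},
    {"owner": {"name": "alice"}},
]
print(changes_per_owner(sample))  # Counter({'alice': 2, 'bob': 1})
```

Because every such script reads the same official dataset, two people
computing the same metric should finally get the same number.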
As an aside, the OSF has been driving a proof-of-concept experiment to
use Bitergia tooling (now ELK-based) for Kata Containers and StarlingX,
which we could extend to OpenStack and all other OSF projects if successful.
Historically we dropped the old Bitergia tooling because it fell short
on OpenStack's complexity (groups of repositories per project team) and
on release-timeframe data, and its visualization capabilities were
limited. But the new version is much better-looking and more flexible,
so it might be a solution in the long term.
Thierry Carrez (ttx)