+1 to adding a doc for this, along with some PR conventions about when to use these. Some questions that I would love to see documented:
* What tests are run automatically and when (pre-commits for all languages, on every commit)
* What other test suites exist, and how should they be used? (post-commits and performance tests run on a schedule and validate already-merged code; if a merge breaks post-commits, we should prefer rollback over rollforward to keep master healthy; if you think your change could affect one of these suites, you can invoke it manually on your PR)
* How do reviewers use Jenkins signals (e.g. do reviewers expect pre-commits to be green before reviewing a PR?)
* What should you do if you think a flaky test caused a failure on your PR? (re-run tests with the magical Jenkins incantation; see the sketch after this list)
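For context on that last point, the "incantation" is just a GitHub comment containing the job's trigger phrase, which re-triggers the build. A minimal sketch of how such a phrase is typically wired up in Job DSL, assuming the jobs use the GitHub Pull Request Builder (ghprb) trigger; the job name and phrase below are placeholders, not lifted from the real Beam configuration:

```groovy
// Illustrative Job DSL snippet; job name and phrase are assumptions.
job('beam_PreCommit_Java_Example') {
  triggers {
    githubPullRequest {
      useGitHubHooks()
      // Commenting this exact phrase on the PR re-runs the job, which is
      // how you recover from a flaky-test failure.
      triggerPhrase('Run Java PreCommit')
      // false: the job also runs automatically on each push, so the
      // phrase is only needed for manual re-runs.
      onlyTriggerPhrase(false)
    }
  }
}
```

Whatever the doc ends up saying, these trigger phrases are exactly the strings it would need to list per suite.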
I also like the idea of automatically scraping the strings from the source of truth rather than maintaining them by hand. However, I suspect it might be more trouble than it's worth: the groovy files live in a different git repo than beam-site, and the site is currently built from simple markdown files which are compiled to HTML and checked in. If sourcing the strings from the source proves difficult, I'm in favor of going the simple route and manually maintaining the list; I think there's sufficient value to justify the maintenance cost.
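If we do try scraping, it might amount to a regex pass over the job definitions at site build time. A rough sketch, assuming the phrases appear as `triggerPhrase(...)` calls and that the jobs live under `.test-infra/jenkins` (both the path and the pattern are guesses about the other repo's layout):

```groovy
// Hypothetical scraper: collect triggerPhrase strings from the Jenkins
// job definitions and emit a markdown bullet list for the site docs.
import groovy.io.FileType

def phrases = []
new File('.test-infra/jenkins').eachFileMatch(FileType.FILES, ~/.*\.groovy/) { f ->
  f.eachLine { line ->
    def m = (line =~ /triggerPhrase\s*\(\s*['"](.+?)['"]\s*\)/)
    if (m.find()) {
      phrases << m.group(1)
    }
  }
}
phrases.unique().sort().each { println "* ${it}" }
```

Even with something this simple, the site build would still need a checkout of the other repo, so it doesn't remove the cross-repo coupling; that's the part that makes me lean toward the manual list.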