Best Practices for Automated Pipelines

1. Produce a best practices document for GitHub Actions workflows. The GitHub docs are extensive and high quality, but too much to read in a few minutes.
    1. How to do a release (the automated way)
    2. Publishing artifacts (signed and unsigned)
    3. Try to eliminate inconsistencies across projects' workflows
    4. How to configure GitHub Pull Requests to run CI (a workflow sketch appears after the checklist below)
    5. Mention core principles: reproducible builds
2. Perform a general check-up of the existing CI workflows of all graduated and incubating projects where execution times are high.
3. Evaluate possibilities for a self-hosted, open source alternative to BuildJet, powered by cheap AWS spot instances or other cloud providers with competitive pricing (such as Hetzner).
4. Stephen Curran (original proposer of the task force)
5. Marcus Brandenburger
    1. The best practices document should also map how individual projects perform their builds locally, e.g. using something like https://github.com/nektos/act
6. Stephen Curran: Checklist of good practices a project should have
    1. Linting
    2. Unit tests
    3. Integration tests

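A minimal sketch of a PR-triggered CI workflow covering the checklist above (lint, unit tests, integration tests). The job layout and the `make` targets are placeholders, not a prescription for any particular project:

```yaml
# .github/workflows/ci.yml -- minimal PR CI sketch; the make targets
# (lint, test, integration-test) are hypothetical placeholders.
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make lint              # e.g. actionlint, golangci-lint, eslint

  unit-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test

  integration-tests:
    runs-on: ubuntu-latest
    needs: [lint, unit-tests]       # run the expensive suite only after cheap checks pass
    steps:
      - uses: actions/checkout@v4
      - run: make integration-test
```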

7. Arun S M — Today at 7:56 AM
It is also possible to run resource-intensive CI checks on GitHub, but on personal forks.
PR reviewers can request this log as part of their review process.
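One way to support this pattern, sketched under the assumption that the heavy suite lives in its own workflow file: make it `workflow_dispatch`-only, so contributors can trigger it manually from the Actions tab of their fork and link the resulting run in the PR. The file name and `make` target are hypothetical:

```yaml
# .github/workflows/heavy-checks.yml -- sketch of a manually triggered,
# resource-intensive suite; run it on a fork and link the run in the PR.
name: Heavy checks

on:
  workflow_dispatch:            # manual trigger only; never runs automatically

jobs:
  stress:
    runs-on: ubuntu-latest
    timeout-minutes: 120
    steps:
      - uses: actions/checkout@v4
      - run: make stress-test   # hypothetical long-running target
```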

8. From Stephen via the Discord TOC chat

  1. FYI -- my brainstorming on CI/CD best practices. None of these are earth-shattering, but it took a long time to get them in place -- often because we didn't think of them, or there were no good examples to follow that fit our project. Even today, the different Aries sub-projects do these to different levels. My thought is that by providing a list, and pointing to repos on a per-language basis to show what's been done, it will be easier for other projects to pick up these best practices.

  2. Ideas

    GitHub Practices
    • Protect the main branch; turn on DCO enforcement
    • Require reviews on merges
    • Use GitHub's concurrency setting to cancel test runs that have become irrelevant -- e.g. PR test runs made obsolete when a merge happens mid-run (see the sketch after this list)

    Use GHA - CI Pipelines
    • Create a test pipeline executed on PRs before publishing
    • Include unit tests, integration tests, linting/code style, static analysis, and coverage reporting
    • Document how to run tests locally, how to run individual failing tests, and how to add tests of all types
    • Where appropriate, implement pre-commit rules
    • Skip test runs when the changes touch only documentation (covered in the same sketch)

    Release - CD Pipelines
    • Document a release process and automate as much of it as possible
    • Create a changelog that is useful to developers and deployers, and solicit feedback on its usefulness; a simple PR list is generally not enough
    • Define a release pipeline triggered when a release is tagged (see the release sketch after this section)
    • Publish packages to well-known places
    • Publish container images to ghcr.io (same sketch)
    • Publish a development release that reflects main
    • Create, maintain, and publish per-release documentation -- documentation as a possible part of every PR
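A sketch combining two bullets above -- cancelling superseded runs and skipping documentation-only changes. The `paths-ignore` globs are assumptions each project would tune:

```yaml
# Sketch: cancel superseded runs; skip CI for documentation-only changes.
name: CI

on:
  pull_request:
    paths-ignore:               # workflow does not run for docs-only changes
      - 'docs/**'
      - '**.md'

# One concurrency group per workflow+ref; a newer run cancels the older one.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test          # placeholder
```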


  3. Another nice-to-have is infrastructure-as-code reference implementations to help people use Hyperledger projects in their environments -- e.g. encourage the creation of docker compose setups for local development, the use of devcontainers, and the creation of Helm charts for deploying components.
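To make the tag-triggered release and GHCR publishing from the Release bullets concrete, a minimal sketch; the tag pattern, Dockerfile location, and image naming are assumptions (GHCR image names must be lowercase, so a repository with uppercase characters needs an explicit lowercase name):

```yaml
# .github/workflows/release.yml -- sketch of a tag-triggered release that
# publishes a container image to ghcr.io.
name: Release

on:
  push:
    tags:
      - 'v*'                    # fires when a tag like v1.2.3 is pushed

permissions:
  contents: read
  packages: write               # lets GITHUB_TOKEN push to ghcr.io

jobs:
  publish-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # image name assumed: ghcr.io/<owner>/<repo>:<tag>, all lowercase
      - run: |
          docker build -t ghcr.io/${{ github.repository }}:${{ github.ref_name }} .
          docker push ghcr.io/${{ github.repository }}:${{ github.ref_name }}
```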


Tasks

Links/Reading List

| Link | Description |
| --- | --- |
| https://github.com/rhysd/actionlint | Linter for workflow files -- saves time when developing new workflow YAML files. Also has security checks built in, something we can never have too much of! |
| https://github.com/apps/socket-security | Attempts to combat supply chain attacks via GitHub Actions (malicious pull requests), among other things. |
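actionlint can itself run as a CI step so workflow files are linted on every PR that touches them. A sketch using `go run` (one of several installation options in the actionlint README):

```yaml
# Sketch: lint workflow files with actionlint on PRs that change them.
name: Lint workflows

on:
  pull_request:
    paths:
      - '.github/workflows/**'

jobs:
  actionlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: 'stable'
      # run from the repo root; actionlint finds .github/workflows by default
      - run: go run github.com/rhysd/actionlint/cmd/actionlint@latest
```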

Chat Log

---

Dave Enyeart — Today at 11:16 AM
My feeling is to leave code coverage decisions to the projects. Especially when goals are dictated from above, I've seen projects with high coverage metrics spend too much time on low-value tests trying to hit the goal, while not spending enough time on other important integration/system/user tests.
Project maintainers are in the best position to decide where to invest their test time and how much weight code coverage should carry.

---

Ry Jones — Today at 11:44 AM
@Dave Enyeart completely agree

---

Peter Somogyvari — Today at 11:46 AM
@Ramakrishna I agree with @Dave Enyeart 
The way I like to put it (which is the same as Dave's comment above, just from a different angle):
Important/safety-critical code should have 100% coverage; the rest of it just gets however much it gets.

---

For example, I consider catch blocks important by default because the quality of software depends hugely on how it handles failure scenarios ("How does it break?"). BUT, funnily enough, during my code reviews these are the codepaths that are usually covered the LEAST, because they are off the happy path and therefore harder to simulate. Oftentimes, when writing test coverage for catch blocks, I discover issues with the error handling logic even before running the tests that are supposed to uncover them.

---

Ramakrishna — Today at 11:57 AM
I agree, Dave. The metric doesn't even have to be very high. In a previous job, all devs on my team were asked to hit 60% code coverage. The build pipeline would actually stall if the tests reported even, say, 59%. But most of those tests, based on my inspection, covered low-hanging fruit and ended up missing some serious bugs that were discovered later.

---

Ramakrishna — Today at 11:59 AM
Test-Driven Development (TDD) is the answer! It requires a lot of discipline, though.

---

swcurran — Today at 4:20 PM
This is the type of thing I meant about the "Checklist". The TOC / best practices doc should say that a project SHOULD use a code coverage tool, mention pros and cons ("Agree as a project on a target test coverage percentage"), and point to tools, and to deployments of those tools in some repos. How you implement code coverage will likely vary based on the language/tech stack.
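As one illustrative, stack-specific example of the kind of tool deployment Stephen describes -- a sketch assuming a Go project and the Codecov uploader action (both assumptions; other stacks would use different tools):

```yaml
# Sketch: coverage reporting in CI, assuming a Go project and Codecov.
name: Coverage

on: [pull_request]

jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: 'stable'
      - run: go test -coverprofile=coverage.out ./...
      - uses: codecov/codecov-action@v4
        with:
          files: coverage.out   # a CODECOV_TOKEN secret may be needed, depending on repo visibility
```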


---

2023-07-20

Dave: dedicated runners are great -- ask Ry about the name of the provider.

Refer back to the best practices document as well (Dave)

List of pain points and experiences - Marcus
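Related to Dave's dedicated-runner note (and item 3 near the top about a self-hosted BuildJet alternative): once a self-hosted runner is registered, pointing a job at it is a one-line change. A sketch -- the labels beyond `self-hosted` are assumptions and must match the labels the runner was registered with:

```yaml
# Sketch: run a heavy job on a registered self-hosted runner.
name: Heavy build

on: [push]

jobs:
  heavy-build:
    # `self-hosted` is built in; `linux` and `spot` are assumed custom labels
    runs-on: [self-hosted, linux, spot]
    steps:
      - uses: actions/checkout@v4
      - run: make build         # placeholder
```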