
1. Produce a best-practices document for GitHub Actions workflows. The GitHub docs are extensive and high-quality, but a little too much to read in a few minutes.
    1. How to do a release (automated way)
    2. Publishing artifacts signed/unsigned
    3. Try to eliminate inconsistencies
    4. How to configure GitHub Pull Requests to run CI
    5. Mention core principles: reproducible builds
2. Perform a general check-up of existing CI workflows for all graduated & incubated projects where the execution times are high.
3. Evaluate possibilities for a self-hosted, open-source alternative to BuildJet that could be powered by cheap AWS spot instances or other cloud providers with competitive pricing (such as Hetzner).
4. Stephen Curran (original proposer of the task force)
5. Marcus Brandenburger
    1. The best-practices document should also contain a mapping of how each project performs its builds locally, e.g. using something like https://github.com/nektos/act
6. Stephen Curran: Checklist of good things a project has
    1. Linting
    2. Unit tests
    3. Integration tests


7. Arun S M — Today at 7:56 AM
It is also possible to run resource-intensive CI checks on GitHub, but on personal forks.
PR reviewers can request these logs as part of their review process.
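The PR-triggered CI item and the checklist above could be sketched as a single workflow. This is only an illustrative sketch, assuming a hypothetical Node.js project; the npm script names and action versions are placeholders to adapt per repo:

```yaml
# .github/workflows/ci.yml -- illustrative only, adjust to the project's stack
name: CI
on:
  pull_request:            # run checks on every pull request
    branches: [main]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint                  # checklist: linting

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test                      # checklist: unit tests
      - run: npm run test:integration      # checklist: integration tests
```

Splitting lint and test into separate jobs lets them run in parallel and makes it obvious in the PR checks view which gate failed.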

## Chat Log

---

Dave Enyeart — Today at 11:16 AM
My feeling is to leave code coverage decisions to the projects. Especially when targets are dictated from above, I've seen projects with high coverage metrics spend too much time on low-value tests trying to hit the goal, while not spending enough time on other important integration/system/user tests.
Project maintainers are in the best position to decide where to invest their test time and how much weight code coverage should carry.

---

Ry Jones — Today at 11:44 AM
@Dave Enyeart completely agree

---

Peter Somogyvari — Today at 11:46 AM
@Ramakrishna I agree with @Dave Enyeart 
The way I like to put it (which is the same as Dave's comment above, just from a different angle):
Important/safety-critical code should have 100% coverage; the rest just gets however much it gets.
For example, I consider catch blocks important by default because the quality of software hugely depends on how it handles failure scenarios ("How does it break?"). BUT, funnily enough, during my code reviews these are the codepaths that are usually covered the LEAST, because they are off the happy path and therefore harder to simulate. Oftentimes, when writing test coverage for catch blocks, I find myself discovering issues with the error-handling logic even before running the tests that are supposed to uncover them.
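To make the point concrete, here is a minimal sketch (names and the fallback config are hypothetical, not from any Hyperledger project) of how a catch block can be forced off the happy path by injecting a loader that fails:

```python
def fetch_config(load):
    """Return config from load(), falling back to defaults on failure.

    The except branch is the kind of catch block discussed above: it only
    runs when the backing store is broken, so it never fires in a
    happy-path test run unless the failure is simulated deliberately.
    """
    try:
        return load()
    except OSError:
        return {"retries": 3}  # hypothetical safe default

def broken_loader():
    # Stand-in for a real backend call during an outage.
    raise OSError("store unreachable")

# Exercising the catch block: simulate the failure explicitly.
assert fetch_config(broken_loader) == {"retries": 3}
# Happy path still works.
assert fetch_config(lambda: {"retries": 5}) == {"retries": 5}
```

Writing the failure-injecting test is often exactly the moment the bug in the error-handling branch surfaces, e.g. a fallback dict with the wrong keys.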

---

Ramakrishna — Today at 11:57 AM
I agree, Dave. The metric doesn't even have to be very high. In a previous job, all devs on my team were asked to hit 60% code coverage. The build pipeline would actually stall if the tests reported even, say, 59%. But most of these tests, based on my inspection, covered low-hanging fruit and ended up missing some serious bugs that were discovered later.

---

Ramakrishna — Today at 11:59 AM
Test-Driven Development (TDD) is the answer! It requires a lot of discipline, though.

---

swcurran — Today at 4:20 PM
This is the type of thing I meant about the "Checklist". The TOC / Best Practices should say that the project SHOULD use a code coverage tool, mention pros and cons ("Agree as a project on a target test coverage percentage"), and point to tools, and to deployments of those tools, that are used in some repos. How you implement code coverage will likely vary based on the language/tech stack.
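One hedged sketch of what such a coverage gate could look like as a CI step, assuming a Python project using pytest-cov (the package name is a placeholder, and the 60% floor is only the example figure from the discussion above, to be agreed per project):

```yaml
# Illustrative CI step: fail the build below an agreed coverage floor
- name: Tests with coverage gate
  run: |
    pip install pytest pytest-cov
    pytest --cov=mypkg --cov-fail-under=60   # mypkg is a placeholder
```

Equivalent gates exist for other stacks (e.g. JaCoCo for Java, nyc for Node.js), which is why the best-practices document should point at tools rather than prescribe one.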


---

2023-07-20

Dave: dedicated runners are great - Ask Ry about the name of the provider.

Refer back to the best practices document as well (Dave)

List of pain points and experiences - Marcus
