Hey team,
A question about your thoughts on testing: pyrolite was submitted as a pre-submission inquiry, and I think it has great potential for pyOpenSci. It does have testing set up (https://pyrolite.readthedocs.io/en/latest/contributing.html), BUT there is nothing like Codecov set up to clearly display coverage.
I’m just curious what your thoughts are on this and what pyOpenSci should recommend. I know that a coverage percentage is not always the best metric, but at the same time some sort of visual representation of code coverage is helpful for understanding the extent of the testing framework. Thoughts?
I would personally recommend an automated code coverage setup (a minimal sketch of such a setup follows the list below), but not put any restrictions on the actual percentage covered.
Of course, some software is harder to get high coverage for than others, and a percentage doesn’t necessarily mean all that much on its own. But having a badge with a percentage that points to an automated tool has some clear advantages to me:
- It shows thought and effort. Having such a badge communicates to the outside world that the authors have put time and thought into providing tests for the software, to make sure that core functionality actually works.
- The actual percentage can be an easy opening point for a discussion on how well the software is tested, or how hard it is to test in the first place.
- When reviewing how well-tested a package is, it is a lower barrier, and just plain nicer, to be able to view this pre-measured on a website such as codecov.io or coveralls.io.
- It makes it more likely for other contributors to add extra tests (we all like to see numbers go up…).
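For concreteness, here is a minimal sketch of what such an automated setup could look like, assuming a pytest-based test suite, a `[test]` extra, and GitHub Actions; the workflow name and Python version are placeholders, not pyrolite’s actual CI:

```yaml
# .github/workflows/coverage.yml -- hypothetical example, not pyrolite's real CI
name: coverage

on: [push, pull_request]

jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install the package plus test dependencies
      - run: pip install -e .[test] pytest pytest-cov
      # Run the test suite and write an XML coverage report
      - run: pytest --cov=pyrolite --cov-report=xml
      # Upload the report to codecov.io, which renders the badge
      # and posts coverage checks on PRs
      - uses: codecov/codecov-action@v4
        with:
          files: ./coverage.xml
```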
One thing I like to set up (with codecov.io, but other services can probably do it too) is a “patch coverage” requirement instead of a “project coverage” requirement. See this sourmash PR for an example. Overall project coverage was 89.38%, and that was the target for the “patch coverage”: the changes proposed in a new PR should have at least the same coverage, preferably more. This helps new contributions drive the coverage up.
So we could strongly suggest a tool like Codecov with an associated badge to those who submit?
@luizirber when you say patch coverage requirements, are you referring to PRs being compared against a default overall coverage goal? That PR looks very similar to what we have set up with Codecov on our tools, and it just happens by default when Codecov is activated (I think??)
It would be cool to be able to describe how to set this up, if it is something different from a default Codecov setup!!
I like Codecov “project” for the overall status and badges, and “patch” for PR checks.
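For reference, this is roughly what that looks like in a `codecov.yml` at the repository root; a minimal sketch, where the `target: auto` and `threshold` values are illustrative, not a pyOpenSci recommendation:

```yaml
# codecov.yml -- minimal sketch of separate "project" and "patch" statuses
coverage:
  status:
    # Project status: compares overall repo coverage to the base commit
    project:
      default:
        target: auto      # use the base commit's coverage as the target
        threshold: 1%     # allow small drops without failing the check
    # Patch status: only the lines changed in the PR must meet the target
    patch:
      default:
        target: auto      # changed lines should be covered at least as well
                          # as the existing project coverage
```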
It’s not without weirdness: it can sometimes get confused by force pushes and PR merge branches, but it’s still good and useful.
It’s also important not to get too distracted by the coverage percentage. It can be easy to have full test coverage without any tests checking the things that actually matter. That said, it is a great guide to what in a package needs more testing.