100% code coverage? Easy!
“Code coverage” measures the amount of production code (usually as a percentage) that is executed when running the test suite. It is often used as a proxy metric for assessing the overall test quality of a code base. Code analysis tools like SonarQube are able to, for example, fail the build pipeline if the code coverage metric falls below a certain threshold.
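Test runners can enforce such a threshold themselves as well. As a rough sketch (assuming a recent Vitest version, where thresholds live under coverage.thresholds; the 80% values are arbitrary examples), the configuration might look like this:
// vitest.config.ts
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      // fail the test run if overall coverage drops below these values
      thresholds: {
        lines: 80,
        branches: 80,
        functions: 80,
        statements: 80,
      },
    },
  },
});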
There is, however, a subtle shortcoming of this metric that too few people seem to be aware of. As stated above, it measures the amount of production code that is executed when running the test suite. It does not take into account whether the tests actually do anything meaningful with the code they execute.
Let’s create a small example to illustrate the issue. Take this straightforward FizzBuzz implementation and an accompanying test that tests nothing:
// fizz-buzz.ts
export const fizzBuzz = (n: number): string => {
  const fizz = n % 3 === 0 ? "Fizz" : "";
  const buzz = n % 5 === 0 ? "Buzz" : "";
  // "Fizz", "Buzz", "FizzBuzz", or the number itself
  return fizz + buzz || String(n);
};
// fizz-buzz.test.ts
import { describe, expect, it } from "vitest";

describe("fizzBuzz", () => {
  it("implements Fizz Buzz", () => {
    // asserts nothing about `fizzBuzz` at all
    expect(1).toEqual(1);
  });
});
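Throughout this post the tests are run with Vitest's coverage reporting enabled; this assumes a coverage provider such as @vitest/coverage-v8 is installed alongside Vitest:
npx vitest run --coverage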
Running the test this way results in a code coverage of 0% – no surprise there. But what happens if we call the fizzBuzz function inside our test?
// fizz-buzz.test.ts
import { describe, expect, it } from "vitest";
import { fizzBuzz } from "./fizz-buzz.ts";

describe("fizzBuzz", () => {
  it("implements Fizz Buzz", () => {
    // call to `fizzBuzz` without doing anything with its result
    fizzBuzz(1);
    expect(1).toEqual(1);
  });
});
Now we measure a line coverage of 100% and a branch coverage of 66.66%:
--------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------|---------|----------|---------|---------|-------------------
All files | 100 | 66.66 | 100 | 100 |
fizz-buzz.ts | 100 | 66.66 | 100 | 100 | 2-3
--------------|---------|----------|---------|---------|-------------------
We can take this further by adding calls for the remaining cases (multiples of 3, of 5, and of both):
// fizz-buzz.test.ts
import { describe, expect, it } from "vitest";
import { fizzBuzz } from "./fizz-buzz.ts";

describe("fizzBuzz", () => {
  it("implements Fizz Buzz", () => {
    // one call per branch, still without checking any of the results
    fizzBuzz(1);
    fizzBuzz(3);
    fizzBuzz(5);
    fizzBuzz(15);
    expect(1).toEqual(1);
  });
});
Now our code coverage passes with flying colors:
--------------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
--------------|---------|----------|---------|---------|-------------------
All files | 100 | 100 | 100 | 100 |
fizz-buzz.ts | 100 | 100 | 100 | 100 |
--------------|---------|----------|---------|---------|-------------------
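For contrast, here is a sketch of what a test that actually checks the results might look like. It yields the same 100% coverage, but now a regression in fizzBuzz would actually make the test fail:
// fizz-buzz.test.ts
import { describe, expect, it } from "vitest";
import { fizzBuzz } from "./fizz-buzz.ts";

describe("fizzBuzz", () => {
  it("returns the number itself when divisible by neither 3 nor 5", () => {
    expect(fizzBuzz(1)).toEqual("1");
  });

  it("returns Fizz for multiples of 3", () => {
    expect(fizzBuzz(3)).toEqual("Fizz");
  });

  it("returns Buzz for multiples of 5", () => {
    expect(fizzBuzz(5)).toEqual("Buzz");
  });

  it("returns FizzBuzz for multiples of both", () => {
    expect(fizzBuzz(15)).toEqual("FizzBuzz");
  });
});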
The point I’m trying to make is that you need to be mindful when using code coverage as a quality metric. It only means something if the development team actually cares about code and test quality. Measuring code coverage and making it visible is a sensible thing to do, but requiring a minimum coverage value might do more harm than good if the team has no real stake in, or interest in, the quality of its code and tests.