How to test your automated tests

One of the more uncertain things in software development is whether your automated tests actually catch the mistakes they're supposed to.

The short version

Change something meaningful in your test fixture, your assertions, or your business logic. At least one test must fail. If nothing fails, your tests aren't doing their job.

The longer version

There's a trinity: the test fixture, the test assertions, and the business logic.

1. Test fixture

The test fixture is the state you set the universe to before running a test case: what you insert into the database, how you set up your mocks to behave, and so on. For this post, I'm also going to count the input you give the application as part of the fixture.
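As a concrete sketch, here's roughly what that can look like in pytest. All the names here are made up for illustration:

```python
# A minimal pytest sketch with made-up names. Everything below is "the
# universe" a test case runs in: stored data, mock behaviour, and (for
# this post) the input handed to the application.
from unittest import mock

import pytest


@pytest.fixture
def orders():
    # State "inserted into the database" before the test runs.
    return {"order-1": {"total": 250, "status": "open"}}


@pytest.fixture
def gateway():
    # A mock set up to behave like a payment provider on the happy path.
    gw = mock.Mock()
    gw.charge.return_value = {"ok": True}
    return gw
```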

Any meaningful change in your fixture must make at least some of your tests fail. If none do, your tests don't work like they should, or you have meaningless crap in your fixture.

So what if I have meaningless crap in the fixture?

No-one wants anything extra in the codebase. Get rid of it.
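Checking the rule is mechanical: flip one meaningful value in the fixture and rerun the suite. A sketch, reusing the hypothetical orders fixture from above:

```python
import pytest


@pytest.fixture
def orders():
    # Was 250. If the whole suite still passes after this change, no
    # test actually depends on the total: either coverage is missing or
    # the value is meaningless crap that can go.
    return {"order-1": {"total": 99999, "status": "open"}}
```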

2. Test assertions

Test assertions are how you tell the computer what the program should output in a test case. The response status must equal 200, for example.
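In pytest terms, with a hypothetical client fixture for the application under test, that could read:

```python
def test_get_order(client):
    # `client` is a hypothetical HTTP test client fixture.
    response = client.get("/orders/order-1")
    assert response.status_code == 200
    assert response.json()["total"] == 250
```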

Any meaningful change in an assertion must make at least some of your tests fail. If none do, your tests don't work like they should, or you have meaningless crap in your assertions.

Same deal as with the fixture.
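The check is the same, too: bend one assertion and make sure the test goes red. A sketch against the hypothetical test above:

```python
def test_get_order(client):
    response = client.get("/orders/order-1")
    assert response.status_code == 200
    # Was 250. This edit must turn the test red. If it doesn't, the
    # assertion is never reached: dead code, a swallowed exception, or
    # a test that silently skips.
    assert response.json()["total"] == 251
```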

3. Business logic

Strip away all the boilerplate and user authentication and whatnot, and you're left with the purpose of the program: the business logic.

Any significant change in the business logic must make at least some of your tests fail. If it doesn't, you need more test coverage. If you think coverage is already good enough, then that particular piece of logic isn't significant, by definition.
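As a sketch, take a hypothetical discount rule. Deliberately breaking it should break a test:

```python
def apply_discount(total: int) -> int:
    # Hypothetical business rule: orders of 200 or more get 10% off.
    if total >= 200:
        return int(total * 0.9)
    return total


# The check: change ">=" to ">", or 0.9 to 0.8, and rerun the suite.
# If nothing fails, the boundary at 200 and the discount rate have no
# real coverage, however good the line-coverage number looks.
```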

The takeaway

Use these three rules to verify your automated tests: change something meaningful in the fixture, the assertions, or the business logic, and make sure at least one test fails every time.