Focus on Test Quality

Engineering teams benefit from focusing on the quality of automated test suites instead of prioritizing clean implementation code. By building and maintaining an excellent test suite, we:

Optimize for what users care about

In general, our users want us to:

  1. Minimize the number of bugs in our applications
  2. Get new features out the door as quickly as possible

By rigorously enforcing quality in test suites and de-prioritizing quality standards in implementation code, we align our efforts with both of these user concerns.

  • By focusing on high-value tests and effective test suites, we prioritize shipping a high-quality product over high-quality code
  • By de-prioritizing standards in implementation code (while still maintaining a quality bar), we can ship more rapidly

Avoid premature optimization

In my experience, finding product-market fit for new features (even when working in relatively well-known domains) can take a while. It’s difficult to predict ahead of time which parts of the application will be scrapped, which leads to entire sections of code being rewritten (sometimes multiple times). It’s more efficient to delay investing heavily in clean implementations until we know the code-base is relatively stable.

Conversely, some parts of the application might change extremely infrequently. It’s not effective to invest heavily in clean implementation code for a subsystem that will be low-maintenance. Clean code reduces our maintenance costs, but maintenance costs for stable code-bases are effectively zero anyway.

We can refactor, clean up, and optimize our implementations as needed.

Strategy

Let’s explore a strategy for building and maintaining an effective test suite by introducing clean interfaces and focusing on high-value tests.

Clean interfaces

When designing a technical solution, we should focus on creating clean interfaces for each of our subsystems. These interfaces should expose relevant, context-aware functions needed by consumers of our domain.

This allows our callers (both real consumers and our test suites) to depend solely on interfaces. We should also expose on the interface any functions required for our test setup, which keeps our tests opaque (by opaque we mean our tests do not rely on implementation details). By doing this, we gain the freedom to change our underlying implementation independently of our consumers.
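As an illustrative sketch (with hypothetical names like UserService and CreateUser, not drawn from any real code-base), a Go interface for a user-management subsystem might look like this:

    package users

    import "context"

    // User is the domain object our consumers work with.
    type User struct {
        ID    string
        Email string
    }

    // UserService is the clean interface for the user subsystem.
    // Both real consumers and the test suite depend only on this,
    // never on the underlying storage implementation.
    type UserService interface {
        // CreateUser lives on the interface partly because our
        // test setup needs it; keeping it here keeps tests opaque.
        CreateUser(ctx context.Context, email string) (User, error)
        GetUser(ctx context.Context, id string) (User, error)
        DeactivateUser(ctx context.Context, id string) error
    }

Note that CreateUser belongs on the interface even if some consumers never call it: our test setup needs it, and exposing it here is what lets the tests stay ignorant of the persistence layer.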

Likewise, by depending on domain-specific functions, our consumers can operate at a higher level of abstraction and avoid getting bogged down in implementation details. This also communicates the intent of our code more clearly.
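Continuing the hypothetical sketch, a consumer can then be written entirely against the interface:

    package users

    import "context"

    // RegisterAccount depends only on the UserService interface, so it
    // reads at the level of the domain rather than of the database.
    func RegisterAccount(ctx context.Context, svc UserService, email string) (User, error) {
        // The consumer neither knows nor cares how users are persisted.
        return svc.CreateUser(ctx, email)
    }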

An added benefit of requiring all dependencies to be interfaces is that it encourages up-front planning of our architecture and various subsystems. By designing our interfaces prior to developing the implementation, we are forced to decide explicitly which operations we need to support. In my experience, this leads to cleaner implementations as well!

High-value tests

I believe two key criteria for writing high-value tests are prioritizing opaqueness and mimicking likely calling behavior.

Enforce opaqueness

Our automated test suite should be decoupled from our underlying implementation code as much as possible. We can enforce this by only testing against interfaces, which allows our tests to be opaque and ignorant of the underlying implementation(s).

For example, if we need to create a test user as part of our test suite’s setup, we might be tempted to have our initialization code insert a row for that user directly into the test database.

However, this relies on our persistence layer’s implementation details. If we changed our database schema, we would have to change our test suite as well, reducing confidence in our ability to detect regressions. Instead, we should add a user-creation function to our interface and invoke it as part of our test setup.
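Here is a minimal sketch of that setup, again assuming the hypothetical UserService interface from above:

    package users

    import (
        "context"
        "testing"
    )

    // setupTestUser creates a test user through the interface rather
    // than inserting a row directly into the database. If the schema
    // changes, this setup code is unaffected.
    func setupTestUser(t *testing.T, svc UserService) User {
        t.Helper()
        u, err := svc.CreateUser(context.Background(), "test@example.com")
        if err != nil {
            t.Fatalf("creating test user: %v", err)
        }
        return u
    }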

Likewise, by enforcing opaque tests, we guarantee we are actually testing business logic rather than testing how the implementation happens to be written (mocking non-external dependencies is a smell that this is happening).

Mimic likely calling behavior

When writing our tests, we should focus on covering the likely calling behavior of our consumers. This concentrates our efforts on hardening against the bugs our users are most likely to encounter.

Similarly, when investigating a bug, we should start by reproducing the error with an opaque test that fails. Once the test passes, we know we have addressed the issue and now have sufficient protection against regressions.
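Continuing the hypothetical example, such a regression test might look like this; it exercises only the interface and mimics how a real consumer would call it:

    package users

    import (
        "context"
        "testing"
    )

    // TestDeactivatedUserCannotBeFetched reproduces a hypothetical bug
    // report through the public interface only. The test fails until
    // the bug is fixed, then guards against regressions afterwards.
    func TestDeactivatedUserCannotBeFetched(t *testing.T) {
        // newTestService is a hypothetical helper (not shown) that
        // returns a real UserService wired to a test database.
        svc := newTestService(t)

        u := setupTestUser(t, svc)

        if err := svc.DeactivateUser(context.Background(), u.ID); err != nil {
            t.Fatalf("deactivating user: %v", err)
        }

        // Mimic the likely consumer behavior: fetching a deactivated
        // user should fail, however deactivation is implemented.
        if _, err := svc.GetUser(context.Background(), u.ID); err == nil {
            t.Fatal("expected an error fetching a deactivated user")
        }
    }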

Conclusion

By focusing on building an effective, high-quality test suite, we avoid premature optimization and target our efforts at what our users actually care about. Enforcing clean interfaces and high-value tests encourages good design and more effective software development.