Having testers made my team worse
Full disclosure: I don’t dislike testers. In fact, I love them! But in this story I share an experience where my team was left with no functional testers, and the net result was that we got better at automation, continuous delivery and DevOps, and grew closer as a team.
A couple of months ago, the last remaining functional tester in my team decided to resign. Initially, the team responded with the expected shock and horror. Visions of impending doom ensued as we had no foreseeable way to test and release our software — right in the middle of a major refactor and feature release! The timing couldn’t have been worse!
And yet, as I write this article today, we are in a better state than we ever were when we had testers. We are where we are not because the testers were bad, but because having testers meant that, as developers, we could get away with being lazy and never truly putting in the effort to write meaningful tests that run in both our CI and CD pipelines.
As alluded to above, with the loss of all functional testers in the team, we as developers had to find a way to reliably test and deliver our software without jeopardising the commitments we had already made. We needed an approach to testing our changes that didn’t reduce development capacity each sprint. The only realistic solution was automation.
Ian Cooper gave a wonderful presentation at DevTernity in 2017 on where TDD went wrong, and we took it to heart. We began adding behavioural tests, derived from the acceptance criteria of our user stories, to our CI pipeline; these gave us quick feedback and let us take a test-first approach to development. We wrote a suite of tests in Gherkin syntax, derived from the business requirements for the tasks at hand, and then wrote code to meet those specifications. Where the logic was complex, we continued to add isolated unit tests to help us along the way.
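To illustrate the idea, here is a minimal Given/When/Then sketch in plain Python. The discount rule is a hypothetical example, not one of our actual business requirements; a real suite would map steps like these to a Gherkin feature file via a tool such as behave or pytest-bdd.

```python
# Feature: Order discounts (hypothetical example)
#   Scenario: Orders over 100 receive a 10% discount
#     Given an order totalling 120.00
#     When the discount policy is applied
#     Then the final total is 108.00

def apply_discount(total: float) -> float:
    """Business rule under test: 10% off orders over 100."""
    return round(total * 0.9, 2) if total > 100 else total

def test_orders_over_100_receive_10_percent_discount():
    # Given an order totalling 120.00
    total = 120.00
    # When the discount policy is applied
    final = apply_discount(total)
    # Then the final total is 108.00
    assert final == 108.00

test_orders_over_100_receive_10_percent_discount()
```

Writing the scenario first, then the code to satisfy it, is what made the test-first approach stick for us.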
Once we had our behaviour tests running in our CI pipeline giving us quick feedback, we had to look at ways to automate our tests in an integrated environment. There are many schools of thought on the correct way to run integration tests, but to get immediate value, we simply wrote our tests to run against our application directly in our development and testing environments.
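A sketch of what "run against the application directly" can look like: a test that exercises the service over HTTP, exactly as it would against a dev or test environment URL. The `/health` endpoint and the in-process stub server below are illustrative stand-ins for a real deployed service, kept local so the sketch is self-contained and runnable.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubAppHandler(BaseHTTPRequestHandler):
    """Stands in for the deployed application under test."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def check_health(base_url: str) -> dict:
    """The integration test: call the running app and verify the response."""
    with urllib.request.urlopen(f"{base_url}/health") as resp:
        assert resp.status == 200
        return json.loads(resp.read())

# In a real pipeline, base_url would point at the dev/test environment.
server = HTTPServer(("127.0.0.1", 0), StubAppHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = check_health(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
assert result["status"] == "ok"
```

The same test code runs unchanged against any environment; only the base URL differs, which is what made this approach pay off immediately.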
We took the test cases that our functional testers had created over the years and scripted them in our integration test suite, which eliminated the need for manual testing. Within days we had reworked our existing regression test framework to use design patterns that eliminate code duplication. This meant that while developing new features, we would know for sure whether we were introducing a regression. We could also update the automation as we developed, getting feedback in an integration environment on our code branch with each commit.
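The kind of refactor described above can be sketched as a page-object-style pattern: the repeated setup sequences from the scripted manual test cases move into one reusable "screen object" that every test shares. The `LoginScreen` API and the `FakeDriver` here are hypothetical; a real suite would wrap a browser driver such as Selenium or Playwright.

```python
class FakeDriver:
    """Stands in for a real browser driver so the sketch is runnable."""
    def __init__(self):
        self.fields = {}
        self.current_page = "login"

    def type(self, field, value):
        self.fields[field] = value

    def click(self, button):
        if button == "submit" and self.fields.get("password") == "s3cret":
            self.current_page = "dashboard"

class LoginScreen:
    """One place for login interactions: tests reuse this instead of
    repeating the same type/click sequence in every scripted case."""
    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type("username", user)
        self.driver.type("password", password)
        self.driver.click("submit")
        return self.driver.current_page

# A scripted test case collapses to one readable line.
driver = FakeDriver()
landing = LoginScreen(driver).login("alice", "s3cret")
assert landing == "dashboard"
```

When the UI flow changes, only the screen object changes; the hundreds of scripted cases that use it stay untouched, which is what kept the rework to a matter of days.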
When the dust had settled, the automation framework and integration tests had been updated, and every manual test had been automated, we found we had no need for manual testing beyond sanity checks during local development.
With our matured automated testing, we have since removed almost all of the manual intervention steps from our CD pipelines. Our pull request policies have also relaxed significantly as we put more trust in our behaviour tests, and we have found a way to release software faster than ever before: safer, more reliable and more frequent. Our DevOps practices can now free us up even further, enabling low-risk deployments, containerised integration tests in CI pipelines and more. All of this can be done safely because we have complete faith in our tests. We now trust each other to deliver quality software and expect every one of us to contribute to our tests, improving our product with every feature that is delivered.