Posted on November 30, 2014
Some patterns pop up again and again. On many, actually most, of the products that I’ve managed, I’ve had to spend huge amounts of time going through the product, finding what’s broken, filing issues, tracking issues, chasing down engineers, apologizing to stakeholders and clients, and then following up and re-testing to make sure that the issues were resolved… only to find that the same issues pop back up a week or two later and force me to relive the same nightmare all over again.
It’s common to refer to this process as User Acceptance Testing (UAT). Let me make it clear that this is a grotesque violation of the acronym and meaning of “UAT.” If you’re working on a small team and doing Agile development, you may very well NOT have a dedicated (or even part-time) QA person/team. Moreover, unit tests, feature tests, and code reviews oftentimes fail to catch bugs and issues that can easily be seen in the live application. An obvious example, and one that I’ve seen countless times, is when two engineers working on different features both make conflicting changes to the CSS or JS that break something in the UI or in specific interaction elements.
Preventable issues like these that get past the engineering team to be discovered by the Product Manager add unnecessary overhead and operational inefficiency to the team as a whole. The reason: as a PM, I now have to create an issue in JIRA, prioritize/schedule it, assign it, track it, and retest it. Moreover, the engineering team has to create a new branch, go back into their code (after it’s become a bit less familiar), fix it, and redeploy it in another sprint. Even if you choose not to attach story points to bugs, which I believe is a worst practice, it will impact your velocity; it must, because it detracts from the team’s ability to work on value-added features. Bottom line, there is a huge yet unrecognized cost inherent in this process.
How do we do it better? Ninety percent of the issues that I find when doing “UAT” could have been easily found by the engineers themselves. In Agile, the onus of developing “shippable” features is on the Engineering team. They should be responsible for, at the very least, doing a sanity check on the feature that they worked on and on any other existing features likely to be impacted by their code. DevOps process improvements can also help. Oftentimes, code conflicts don’t exist on a developer’s local machine simply because they haven’t merged upstream changes into their feature branch. A best practice is to ensure that this occurs before the engineer merges their code (or opens a pull request, if CI is set up to auto-deploy to Dev).
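As a minimal sketch of that “merge upstream first” step, the script below plays out the scenario in a throwaway local repo (a stand-in for a shared remote; the branch, file, and commit names are all illustrative, and it assumes git ≥ 2.28 for `init -b`):

```shell
set -e
# Throwaway repo standing in for the shared codebase.
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email "dev@example.com" && git config user.name "Dev"

echo "body { color: black; }" > style.css
git add style.css && git commit -qm "initial styles"

# Engineer starts a feature branch...
git checkout -qb feature/new-widget
echo ".widget { display: flex; }" > widget.css
git add widget.css && git commit -qm "add widget styles"

# ...meanwhile another engineer's change lands on main.
git checkout -q main
echo "h1 { font-size: 2rem; }" >> style.css
git commit -qam "heading styles"

# Before merging the feature, pull upstream changes INTO the branch,
# so any conflict surfaces on the engineer's machine, not in Dev.
git checkout -q feature/new-widget
git merge -q main -m "merge upstream main"
echo "feature branch is up to date with main"
```

The point of the last merge is that conflict resolution and re-testing happen locally, before the feature is merged or a pull request auto-deploys it.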
The bottom line is that it’s crucial to establish a culture of ownership and accountability in your Engineering team. Code commits shouldn’t just pass the tests that the engineer wrote (assuming TDD); that’s not enough. The end-user doesn’t give a damn about tests; they care about whether or not the product/feature works as intended and provides the value that they expect.