Thousand Cuts

They say one’s perception is one’s reality. While this saying has devolved into the pantheon of platitudes, its relevance to quality is as strong as ever. All too often we in the software industry quantify quality through an objective lens. On the surface, being objective about the quality of those solutions we work so diligently to deliver is a great approach. Those things that are easily measured are easy to report upon as well. So, we tend to focus on performance metrics, defect counts, net promoter scores, call volumes and the like. They are quantifiable. They are inarguable. They are… objective. But how do users perceive quality?

Let’s consider the dreaded escape defect.

I’m torn. Torn between my natural proclivity toward idealism and the unwavering real-world need for pragmatism. While the idealist in me wishes for a day when everyone has flawless defect removal efficiency (DRE) metrics, the pragmatist in me knows that not all bugs are created equal. High-severity defects can have significant and quite real impacts. Revenue loss. Identity theft. Critical system outages. Generally these defects warrant immediate and decisive action… and for our purposes here are boring to talk about. Lower-severity defects, however, capture my interest. These little guys don’t always get the attention they so richly deserve. “Ok. Ok. So there is a typographical error buried somewhere deep in the help tree content. Yeah, it’s sub-optimal, but it’s not preventing users from leveraging major functionality. We’re not losing sales or ad dollars every moment it goes unresolved.” Agreed. You can even justifiably argue that the objective contemporaneous effects of such a defect may be limited to a minor annoyance or a hearty chuckle around the proverbial water cooler. So what’s the big deal?

Perception. Perception is the big deal.

So, what is the difference between objective and perceived quality, and why should we care? The most straightforward explanation is that objective quality is a measurable reflection of the current state of a product, while perceived quality is how users feel about that product. Often, what users perceive about the quality of your solution is not a direct reflection of its actual quality. Perceptions are fueled by experience, brand strength, customer loyalty, and the juxtaposition of your product with other similar products. These indirect influencers often give users a hyper-awareness about quality, which can amplify the perceived impact of what might otherwise be considered a low-priority issue.

The relationship between the objective view of quality and consumer perceptions has both short and long-term implications.

A significant factor feeding one’s perception of quality is that individual’s propensity to incorporate all sources of relevant information when formulating an opinion on the quality of a product. Ultimately, an individual’s perception of quality is impacted in a negative and exponential fashion over time as objective quality degrades. Additionally, a lag effect can occur in which changes in quality don’t have an immediate impact on the perception of quality. Simply put, one’s negative perception grows exponentially as the number of bugs goes up and as those bugs go longer without being fixed.

Worse yet, the effects of objective quality on perceived quality are asymmetric. Degradations in quality have larger short- and long-term effects than improvements in quality, and degradations are perceived far more quickly. In other words, consumers will immediately notice when something is broken, reinforcing their perception of poor quality, but improvements to quality do far less to repair user perception than quality problems do to erode it.

So far I’ve mostly just touched on the short-term impacts of objective quality on perception. The outlook continues on its downward spiral once we get into the long-term effects.

The implication of the long-term effects of objective quality degradation needs to be recognized at an organizational level. Consumers don’t simply forget how they feel about a product or company. They don’t set a reminder on their calendar to reevaluate their position. The period of time in which a consumer reevaluates their perception of a product’s quality varies based on several factors. These include, but are not limited to, the consumer’s level of knowledge, the cognitive effort the individual is willing and/or able to commit to the thought process, the impact the product has on their personal and/or professional lives, purchase frequency, price, experience, and the availability of viable competitor products. Why does this matter? …Expectations.

An individual’s perception of quality is shaped by their expectation of quality. A high expectation of quality, whether based on experience, comparison, brand association, etc., combined with a low tolerance for quality problems, is directly reflected in perceived quality. The most impactful carryover effect of objective quality degradation is the lowering of expectations of quality, which can be scoped as narrowly as the product or feature displaying the quality issue(s) or as widely as the company, brand, or even the industry as a whole. The long-term effects of lowered expectations can last five to seven years on average, or even indefinitely. Once confidence has been lost, it can be a difficult and lengthy process to regain.

Now if we take all this into consideration when evaluating the impact of those seemingly trivial low-priority escape defects… well, maybe they’re not so trivial after all. Let’s imagine that we choose never to fix those trivial defects. Whether the decision is explicit or implicit doesn’t matter. Then, as an added bonus, let’s apply all of this to a long-lived application. Those minor annoyances accumulate. Users become hyper-sensitive to the smallest of issues, compounding the problem. Consumer sentiment goes social, turning into a reputational contagion that damages your brand. We’ve now eroded customer perception of quality to a possibly irrecoverable level. When you think about it, there is yet another common platitude that bears relevance… death by a thousand cuts.

Overwhelmed yet? Don’t be. Awareness is just the first step.

Let’s talk strategy.

In true full-circle fashion, let’s take another look at objective quality measures. For the most part they have one thing in common. They are lagging indicators of quality. Escape defects… too late. Performance monitoring… too late. Net promoter score… too late. Call volume… really too late. When we measure quality objectively we primarily focus on detection.

As a guiding principle, favor prevention over detection. So, what are those activities that help to prevent quality problems?

Design and Build In Quality
This means being proactive in your role as a quality professional. Working with your team to ensure the rigorous use of sound design principles, such as SOLID, loose coupling, and high cohesion, is essential. By leveraging practices that limit the “blast radius” of changes to the app, you decrease risk, reduce the testing footprint, and increase the ability to deliver rapidly.
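As a minimal sketch of what that looks like in code, consider having callers depend on an interface rather than a concrete implementation (the names here are hypothetical):

```python
from typing import Protocol


class NotificationSender(Protocol):
    """The seam callers depend on, rather than any concrete sender."""
    def send(self, recipient: str, message: str) -> None: ...


class EmailSender:
    def send(self, recipient: str, message: str) -> None:
        print(f"emailing {recipient}: {message}")


class SmsSender:
    def send(self, recipient: str, message: str) -> None:
        print(f"texting {recipient}: {message}")


def notify_user(sender: NotificationSender, recipient: str, message: str) -> None:
    # Swapping EmailSender for SmsSender, or fixing a bug in either, is a
    # change with a one-class blast radius: no caller needs to change.
    sender.send(recipient, message)


notify_user(EmailSender(), "user@example.com", "Your order shipped.")
```

Because callers only know about the interface, a defect fix in one sender can’t ripple into unrelated functionality, which keeps the testing footprint for that fix small.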

I recommend you invest in your delivery tool chain to enable tempered releases as well. By exposing new changes to only a limited portion of your user base, “blast radius” takes on another connotation. Better to expose 1-2% of the potentially impacted user base than 100%. This typically makes it far easier to quickly recover from quality issues as well.
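A tempered rollout can be as simple as a deterministic percentage gate. Here is a minimal sketch (the function and feature names are hypothetical; real feature-flag systems add targeting, kill switches, and telemetry):

```python
import hashlib


def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into [0, 100) for a given feature.

    Hashing (feature, user_id) keeps each user's assignment stable across
    requests while giving different features independent cohorts.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent


# Expose the new checkout flow to ~2% of users; everyone else gets the old path.
if in_rollout("user-42", "new-checkout", 2.0):
    pass  # serve the new code path
else:
    pass  # serve the existing code path
```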

Measure the Right Things
Mix in leading indicators along with the typical lagging indicators. Just because Defect Leakage is a “too little too late” metric doesn’t mean we don’t need to know what our leakage landscape looks like. Continue to collect those lagging metrics as a running measure of how objective quality is actually doing. But if you are not already tracking leading indicators, you need to be.

Defect Removal Efficiency (DRE) – If you don’t measure any other leading indicators, measure this one! This deceptively simple metric can expose those nasty quality degradation trends that erode consumer confidence. Simply put, it lets you know whether you are fixing more bugs than are being introduced into the system. A good target is around 85% DRE. Anything below that is an indicator that you aren’t maintaining a sustainable fix rate and that bugs have long (or even indefinite) lifespans. Left to linger, defects spell impending doom for your product as consumer perception of quality deteriorates.
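One common formulation divides the defects removed before release by the total found, including escapes. A minimal sketch (the numbers are made up for illustration):

```python
def defect_removal_efficiency(removed_pre_release: int, escaped: int) -> float:
    """DRE as a percentage: defects removed before release / total defects found.

    'escaped' counts defects discovered in production. This is one common
    formulation; tune the inputs to match how your team counts defects.
    """
    total = removed_pre_release + escaped
    if total == 0:
        return 100.0
    return 100.0 * removed_pre_release / total


# 240 bugs caught and fixed before release, 50 found by users afterward:
dre = defect_removal_efficiency(240, 50)
print(f"DRE: {dre:.1f}%")  # ~82.8% -- below the ~85% target, worth a look
```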

Cyclomatic Complexity – Often associated with code coverage, analyzing the complexity of a system under test (SUT) can be leveraged as a predictive measure. The more complex the SUT, the more likely it is to break. More importantly, the higher the degree of complexity, the more likely your application is to break in unexpected ways (a large blast radius).
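For a sense of what the metric counts, here is a rough McCabe-style approximation using Python’s ast module; dedicated tools such as radon or lizard handle the full set of cases:

```python
import ast

# Node types that add a decision point. This is a deliberately rough subset;
# real analyzers count more constructs and weigh boolean operators per clause.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp, ast.comprehension)


def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))


snippet = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3: one linear path plus two branches
```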

Defect Mining – Hopefully you’re already using a defect tracking tool to manage all those perception-eroding bugs. If not, you should be. If so, you should be leveraging the gold mine of latent predictive data waiting for you within it. Imagine generating heat maps, word clouds, or trend graphs that give you visibility into the functionality or components of your solution that have been problematic over time. Perhaps even leveraging machine learning to search for patterns that could predict the who, when, where, how, and likely blast radius of the next defect.
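Even without machine learning, a simple aggregation over a tracker export can surface hotspots. A sketch, assuming a CSV export with component and created columns (field names will vary by tool):

```python
import csv
from collections import Counter


def defect_hotspots(path: str) -> Counter:
    """Count defects per (component, month) to surface chronic trouble spots."""
    hotspots = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["created"][:7]  # e.g. '2024-03' from '2024-03-15'
            hotspots[(row["component"], month)] += 1
    return hotspots


# The ten hottest component/month pairs -- the raw material for a heat map.
for (component, month), count in defect_hotspots("defects.csv").most_common(10):
    print(f"{month}  {component:<20} {count}")
```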

Use Agile Test Methodologies
Acceptance Test Driven Development (ATDD) – This powerful approach to building software came out of the Lean-Agile movement. The practice emphasizes the need to understand what you’re building before you build it. A novel idea, no? As a quality professional, the forcing mechanism of an upfront conversation provides a great opportunity to design in quality.
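In practice, the output of that upfront conversation is an executable acceptance criterion. A minimal pytest-style sketch (the Cart API here is hypothetical):

```python
# Acceptance criterion agreed on *before* implementation, captured as an
# executable test. The Given/When/Then comments mirror the conversation.

class Cart:
    def __init__(self) -> None:
        self.items: list[tuple[str, float]] = []

    def add(self, sku: str, price: float) -> None:
        self.items.append((sku, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)


def test_cart_total_reflects_all_added_items():
    # Given an empty cart
    cart = Cart()
    # When the customer adds two items
    cart.add("SKU-1", 10.00)
    cart.add("SKU-2", 5.50)
    # Then the total reflects both items
    assert cart.total() == 15.50
```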

Swiss Cheese Model – Monolithic regression suites are prone to instability, maintainability, and speed problems. Whether tending to a legacy project or building some new hotness from the ground up, this model works. Since this topic deserves its own post, let’s keep the explanation simple. Rather than execute every test case to validate that a release candidate has no unexpected impact (again with the blast radius), slice your automation into thin, feature-based segments. Once sliced up, run those segments continuously. This will allow every small change to be quickly verified without tying up your build process with long-running regression suites.
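One way to slice a suite is with pytest markers. A sketch (the feature names are hypothetical, and each marker would be registered under the markers key in pytest.ini):

```python
import pytest


def apply_discount(total: float, rate: float) -> float:
    return total * (1 - rate)


# Each test is tagged with the feature slice it belongs to.
@pytest.mark.checkout
def test_checkout_applies_discount():
    assert apply_discount(100.0, 0.25) == 75.0


@pytest.mark.search
def test_search_handles_empty_query():
    assert [] == []  # placeholder for a real search assertion
```

Each slice can then run on every relevant change (e.g. pytest -m checkout) while the full stacked suite runs on a slower cadence, letting the overlapping layers catch what any single thin slice misses.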

Continuous Delivery – Small, frequent changes and a delivery tool chain that allows for the rapid deployment of not only customer benefit but also bug fixes. Yes, we need to be comfortable with the idea that defective software will make its way to consumers. What we shouldn’t be comfortable with is long release cycle times hindering quick time to resolution (TTR) for those bugs that escape.
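If TTR is the thing to protect, measure it. A minimal sketch, assuming your tracker can export reported and resolved timestamps (the data here is illustrative):

```python
from datetime import datetime
from statistics import median


def median_ttr_hours(defects: list[dict]) -> float:
    """Median time-to-resolution for escaped defects, in hours."""
    durations = [
        (datetime.fromisoformat(d["resolved"]) -
         datetime.fromisoformat(d["reported"])).total_seconds() / 3600
        for d in defects
    ]
    return median(durations)


escapes = [
    {"reported": "2024-03-01T09:00", "resolved": "2024-03-01T15:00"},  # 6h
    {"reported": "2024-03-02T10:00", "resolved": "2024-03-04T10:00"},  # 48h
]
print(f"median TTR: {median_ttr_hours(escapes):.1f}h")  # 27.0h
```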

Ideally, every quality problem would be prevented from ever being created. As much as the aforementioned techniques can aid in prevention, back here in the real world there will always be escape defects. When the inevitable happens and a bug escapes the rigorous delivery chain you’ve so lovingly crafted, a small blast radius and the ability to roll out fixes rapidly can turn a perception disaster into a hardly noticeable blip on the proverbial quality radar.
