On the cost of software quality, by Ger Cloudt, author of “What is Software Quality?” | Quality Management Coach & Lecturer @ TU/e | Speaker about Software Quality
The topic below, together with many other aspects of software quality, is discussed in Ger Cloudt’s book “What is Software Quality?”. You can order your own copy on his website, here.
In 1979 Philip Crosby published a book titled “Quality Is Free”, in which Crosby argues that establishing good quality principles will result in savings greater than the investment in quality made. Although I agree with this belief in general, I would like to add some nuance to the claim that “quality is free”. To do so we need to understand the cost of software quality better and realize that it consists of two components: the “cost of implementing quality” and the “cost of rectifying poor quality”.
The costs of implementing quality are the costs incurred to achieve a certain level of quality. In software development these costs are typically incurred during the development process itself and might consist of the amount of testing performed, the amount of reviewing performed, the cost of having the best craftsmen on the team, the effort spent analyzing requirements, the cost of tooling to analyze your software, and so on. One could summarize these costs as money spent during development to achieve a certain level of quality.
Costs of rectifying poor quality are costs associated with a product that is already on the market. These are the costs of the damage caused by poor quality: typically, costs incurred during maintenance to rectify failures, costs associated with field returns, but also costs caused by reputational damage and/or liabilities. One could summarize these costs as money spent or lost after the product is delivered to the market.
The total cost of software quality is the sum of the “cost of implementing quality” and the “cost of rectifying poor quality”. One should also understand the inverse relationship between the two types of costs: if no investments are made in achieving quality (low cost of implementing quality), the cost of rectifying poor quality will be high, and vice versa, as shown in the figure to the right.
If you understand both types of costs of software quality and their inverse relation to each other, you might conclude that you should strive for the lowest point on the Total Cost of Quality curve, indicated in the figure as the sweet spot “C”. From a financial point of view this is a completely logical conclusion. However, it is not that simple: imagine you are developing safety-critical software in which an error can lead to injury or even death. Which sweet spot would you strive for then? The potential cost of such an error might be so high that you want to minimize the risk as much as possible, choosing sweet spot “B” or even “A”.
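The trade-off above can be made concrete with a small numerical sketch. The cost functions and figures below are invented purely for illustration (the real curves depend entirely on your product and domain): implementing cost rises with the quality effort invested, rectifying cost falls, and the financially optimal sweet spot “C” is simply the minimum of their sum.

```python
# Illustrative model of the Total Cost of Quality curve.
# All cost figures and curve shapes are hypothetical, for demonstration only.

def implementing_cost(effort):
    """Cost of implementing quality: grows with the effort invested."""
    return 10.0 * effort

def rectifying_cost(effort):
    """Cost of rectifying poor quality: falls as more effort is invested."""
    return 500.0 / (1.0 + effort)

def total_cost(effort):
    """Total cost of quality: the sum of both components."""
    return implementing_cost(effort) + rectifying_cost(effort)

# Scan a range of effort levels and pick the cheapest one: sweet spot "C".
efforts = [e / 10.0 for e in range(1, 201)]
sweet_spot = min(efforts, key=total_cost)
```

A safety-critical team would deliberately pick a point to the right of this minimum (more implementing effort than is financially optimal), because the model above does not capture the extreme cost of a safety defect.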
Considering the quality types as defined in the 1+3 Software Quality Model (1+3 SQM), one might need to define multiple sweet spots for the total cost of software quality.
Product Quality is defined as the quality as perceived by the customer or user of the software product. Your sweet spot for Product Quality is determined by the application of the product and the domain in which it is applied. As mentioned earlier, if you are developing a safety-critical application you might want to define your sweet spot at position “B” or even “A”. This implies you would need to invest in implementing Product Quality, e.g. extensive hazard analysis, extensive requirements engineering, and extensive testing with a strong focus on so-called ‘sad flows’ and ‘corner cases’. You would need to demand high test coverage in all 4 quadrants of the Agile Testing Quadrants. Your quality strategy for Product Quality should target the determined sweet spot to mitigate safety-critical defects as much as possible.
The second sweet spot to be determined is associated with Design Quality and Code Quality. Both of these quality types are related to the internal structure of the software and address aspects like maintainability. The sweet spot to choose depends on the expected lifetime of the software. When building a prototype for proof-of-concept purposes only, sweet spot “D” could be appropriate, although from a cost point of view you might still strive for sweet spot “C”. However, if speed is important, one might accept a higher cost of rectifying defects and therefore move towards sweet spot “D” or even “E”.
If, on the other hand, you are developing software with a long lifetime of many years, you would like to choose your sweet spot towards “B” or even beyond, limiting Technical Debt and allowing continuous refactoring and restructuring to keep your Design and Code Quality at a high level. Choosing your sweet spot in this case also depends on the level of your Organizational Quality, e.g. the professionalism and craftsmanship of your engineers and the stability of your team. If attrition is high, you need to spend more effort on maintaining a high level of Design and Code Quality, including the associated documentation.
To start defining your software quality strategy and its associated quality assurance plan, you should determine the sweet spots you want to reach for Product Quality and for Design and Code Quality.
Unfortunately, your exact position on the curve is not measurable, and there is no guarantee you will reach your targeted sweet spot with your planned activities. However, defining the targeted sweet spot will serve as a direction for your team. It enables the team to think about the activities and actions to be taken to get as close as possible to the targeted sweet spot. Expert opinion, supported by metrics, should indicate whether this is the case.
As an example, when sweet spot “B” or higher is targeted for Product Quality, the team knows it has to spend more than sufficient time on requirements engineering and think about the amount of refinement, documenting, reviewing and verifying through early feedback loops. In addition, level “B” will certainly require an extensive level of testing. The team needs to think about how much testing needs to be performed, what coverage they would like to see, and what types of tests.
Additionally, if you would like to achieve sweet spot “B” for Design and Code Quality as well, because you will need to maintain the code for many years, the team needs to think about measuring Code Quality by performing Static Code Analysis supported by tooling like TiCS. Which warnings need to be addressed and which can be ignored, or should all of them be fixed? Which tools should be used, and against which coding standard?
Decisions need to be made about measuring Cyclomatic Complexity, dead code, and code duplication, and about how to deal with the results. Agree on naming conventions and decide on your reviewing strategy and process.
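To illustrate what a tool measures here, the sketch below computes a simplified cyclomatic complexity for a piece of Python source: one plus the number of branch points. Real static-analysis tools count more constructs and apply per-language rules; this minimal version, using only the standard `ast` module, is an assumption-laden illustration of the principle.

```python
import ast

# Simplified cyclomatic complexity: 1 + number of branch points.
# Real tools count additional constructs; this is an illustration only.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source):
    """Parse Python source and count decision points in its syntax tree."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

# Hypothetical snippet under analysis: an if and an elif give complexity 3.
snippet = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(snippet))  # → 3
```

A team would then agree on a threshold (e.g. flagging functions above a chosen complexity limit) as part of its coding standard, which is exactly the kind of decision the paragraph above asks for.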
For Design Quality, the team could think about which design methodologies to apply: defining architectural rules and design rules that can subsequently be checked against the actual implementation in the code, and deciding how to deal with violations of those rules. In this context, tooling can be used to analyze the code and measure cyclic dependencies and relations between modules at different levels.
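One such architectural check, detecting cyclic dependencies between modules, can be sketched with a depth-first search over a dependency graph. The module names and dependencies below are hypothetical; a real tool would extract the graph from the codebase itself.

```python
# Detect a cyclic dependency in a module graph via depth-first search.
# Every module must appear as a key in `deps`, even if it has no dependencies.

def find_cycle(deps):
    """Return one dependency cycle as a list of modules, or None if acyclic."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / in progress / done
    color = {m: WHITE for m in deps}
    stack = []                            # current DFS path

    def dfs(module):
        color[module] = GRAY
        stack.append(module)
        for dep in deps[module]:
            if color[dep] == GRAY:        # back edge: cycle found
                return stack[stack.index(dep):] + [dep]
            if color[dep] == WHITE:
                cycle = dfs(dep)
                if cycle:
                    return cycle
        color[module] = BLACK
        stack.pop()
        return None

    for m in deps:
        if color[m] == WHITE:
            cycle = dfs(m)
            if cycle:
                return cycle
    return None

# Hypothetical layered architecture with a rule violation:
# the persistence layer depends back on the domain layer.
modules = {
    "ui": ["domain"],
    "domain": ["persistence"],
    "persistence": ["domain"],
}
print(find_cycle(modules))  # → ['domain', 'persistence', 'domain']
```

Such a check could run in the build pipeline, failing the build when a design rule like “no upward dependencies between layers” is violated.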
Also, the team could decide to create a Technical Debt backlog in which items of Technical Debt are recorded. One could estimate the size of each Technical Debt item in story points and calculate the total estimated size. A policy on how to handle and prioritize these Technical Debt items is then needed.
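A Technical Debt backlog like the one described above could be kept in any issue tracker; as a minimal sketch, the structure boils down to a list of sized items. The items and story-point estimates below are invented examples.

```python
from dataclasses import dataclass

@dataclass
class TechDebtItem:
    description: str    # what needs to be repaid
    story_points: int   # estimated size of the repayment
    interest: str       # how the item hurts the team while it remains unpaid

# Hypothetical backlog entries, purely illustrative.
backlog = [
    TechDebtItem("Split the 2000-line OrderService class", 8, "slow reviews"),
    TechDebtItem("Remove duplicated parsing logic", 3, "bugs fixed twice"),
    TechDebtItem("Add missing integration tests for billing", 5, "risky releases"),
]

# Total estimated size of the Technical Debt, in story points.
total_size = sum(item.story_points for item in backlog)
print(total_size)  # → 16
```

Recording the “interest” alongside the size supports the prioritization policy: items whose interest payments hurt most are candidates to repay first.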
As you can see, there are many aspects that influence the choices to be made regarding your software quality strategy. In many cases, a software quality strategy is not created explicitly but originates implicitly.
Make your software quality strategy explicit. First, you need to determine where you want to be, depending on your application and the expected lifetime of your software. Second, you need to make the right choices among all possible measures to achieve your target. Make it all explicit and communicate it.
Good luck in defining your software quality strategy!