Changed all references to the old ISO/IEC 25010 document, such as the main quality characteristic "Operability", to the new names, such as "Usability".
There was a demand for a clear diagram mapping TQI metrics to ISO/IEC 25010 quality characteristics. This diagram has been added.
A problem has been detected for rule-based metrics. Suppose a coding standard has 20 rules at severity level 1 and 1 rule at severity level 3. In that case a violation of the level 3 rule has more impact on the TQI than any violation of a severity level 1 rule, which is not what is expected. The reason is that each violation is divided by 4 raised to the power of its severity level minus one, multiplied by the number of rules at that severity level. So a severity level 1 violation counts 1/(4^0*20) = 1/20, whereas a severity level 3 violation counts 1/(4^2*1) = 1/16. The solution is to divide by the average number of rules per severity level instead, in this case (20+1)/2 = 10.5 for each level. The TQI document has been adjusted accordingly.
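The old and new weighting schemes can be compared with a short sketch. The formula below is taken from the entry; the function and variable names are illustrative, not part of the TQI document.

```python
# Assumption: a single violation contributes 1 / (4^(severity-1) * divisor),
# where the divisor was the number of rules at that severity level (old
# scheme) and is now the average number of rules per severity level (new).

def violation_weight(severity, divisor):
    return 1.0 / (4 ** (severity - 1) * divisor)

rules_per_level = {1: 20, 3: 1}   # 20 rules at severity 1, 1 rule at severity 3
avg_rules = sum(rules_per_level.values()) / len(rules_per_level)   # (20+1)/2 = 10.5

# Old scheme: the level 3 violation outweighs a level 1 violation.
old_sev1 = violation_weight(1, rules_per_level[1])   # 1/20 = 0.05
old_sev3 = violation_weight(3, rules_per_level[3])   # 1/16 = 0.0625

# New scheme: dividing by the average restores the intended ordering.
new_sev1 = violation_weight(1, avg_rules)            # 1/10.5
new_sev3 = violation_weight(3, avg_rules)            # 1/168
```

With the average divisor, a level 1 violation again carries 16 times the weight of a level 3 violation, as the 4^(severity-1) factor intends.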
For all metrics whose virtual TQI score can drop below zero, the problem described for TQI definition version 3.10 can occur. Fan out is such a metric. In this version of the document, the TQI fan out definition has been adjusted so that the TQI score for fan out can never be lower than zero, to avoid strange effects during aggregation.
A defect in measuring code duplication has been detected. Code duplication is based on lines of code that are duplicated. Since blank lines and comments are ignored for code duplication, the following could happen: suppose two files A and B are completely identical. Moreover, suppose that 30% of the lines of code in these files are comments or blank lines. Then the code duplication for these completely identical files is only 70%. It should be 100%. The reason for this problem is that lines of code are counted for code duplication, whereas this should have been tokens instead. From now on the TQI code duplication measurement is based on tokens. The document has been adjusted accordingly.
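The example above can be put into numbers. The counts below are illustrative; the point is the ratio, not the specific values.

```python
# Two completely identical files in which 30% of the lines are comments or
# blank lines: 10 lines each, of which 7 are code lines.
total_lines = 10
code_lines = 7        # comments and blank lines are ignored by the detector

# Old, line-based measurement: only the 7 code lines are reported as
# duplicated, while the denominator covers all 10 lines -> 70%.
line_based = code_lines / total_lines

# New, token-based measurement: comments and blank lines produce no tokens,
# so every token of the file is duplicated -> 100%, as expected.
duplicated_tokens = total_tokens = 42    # illustrative token count
token_based = duplicated_tokens / total_tokens
```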
The change made for version 4.4, i.e. excluding C# interfaces from code duplication, was a mistake. This has been repaired.
For the introduction of the TQI security metric, some checks have been transferred from abstract interpretation to security. One of these checks is the detection of buffer overflows. The abstract interpretation part of the document still contains an example of a buffer overflow. This example has been moved to the TQI security section.
Code duplication in C# interfaces can't be solved, because interface members need to be copied every time the interface is used. The document has been adjusted to exclude C# interfaces from code duplication checking.
It is impossible to get a TQI code duplication score of 0%. Even with 100% code duplication the TQI score is still 10%. The score has been adjusted slightly to make sure that 100% code duplication results in a 0% TQI code duplication score.
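A minimal sketch of the kind of adjustment involved, purely for illustration: assume the old mapping was linear with a floor of 10% at full duplication. The real formula lives in the TQI document; both functions below are hypothetical.

```python
# Hypothetical old mapping: linear, but never reaching 0%.
def old_score(duplication):
    return 100 - 0.9 * duplication   # 100% duplication -> 10% score

# Hypothetical adjusted mapping: the slope now reaches 0% at full duplication.
def new_score(duplication):
    return 100 - 1.0 * duplication   # 100% duplication -> 0% score
```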
There was a parenthesis missing in the definition of the compliance factor. This has been fixed.
This is a minor upgrade with only some cosmetic changes and typo fixes.
Based on feedback from customers, the security metric has been introduced as one of the 8 TQI metrics. Security is becoming more and more important in the software world, and it is also one of the ISO/IEC 25010 main quality characteristics. Security is not added as a ninth metric; it replaces the TQI metric dead code. There are 2 reasons for this replacement:
The differences between the levels of the TQI cyclomatic complexity definition adopted for version 3.10 appear to be too small. A new definition has been proposed based on the idea that an average cyclomatic complexity of 3 should be level C and an average cyclomatic complexity of 5 should be level F. The document has been updated with this new definition.
The current definition of cyclomatic complexity can lead to unexpected results. A simple example illustrates this behavior. Suppose there are 2 files: file A has an average cyclomatic complexity of 3 (TQI score: 80%) and file B has an average complexity of 7 (TQI score: 0%), and suppose both files contain the same number of functions. Then the overall average cyclomatic complexity is 5 (TQI score: 40%). Now suppose the average complexity of file A improves from 3 to 2.5 while that of file B deteriorates from 7 to 9.5. The TQI score of file A increases from 80% to 90% (+10%) and that of file B remains 0%, but the overall TQI cyclomatic complexity score drops from 40% to 20% (-20%), which is counterintuitive. This has happened a couple of times in practice. It can happen because the TQI score for cyclomatic complexity can virtually go below zero. A new definition has been adopted that remains positive for all possible cyclomatic complexity values.
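The anomaly can be reproduced with a short sketch. It assumes the previous score was linear (80% at an average complexity of 3, 0% at 7, i.e. score = 140 - 20*cc); that interpolation matches all the numbers in the entry, but the exact formula is the TQI document's.

```python
# Virtual score may go below zero; only the displayed score is clamped.
def virtual_score(cc):
    return 140 - 20 * cc

def shown_score(cc):
    return max(0, virtual_score(cc))

# Before: files A (cc 3) and B (cc 7) with equally many functions.
before = shown_score((3 + 7) / 2)      # score at average cc 5 -> 40

# After: A improves to 2.5, B deteriorates to 9.5.
after = shown_score((2.5 + 9.5) / 2)   # score at average cc 6 -> 20

# Per file, A goes 80 -> 90 (+10) and B stays at 0 (its virtual score just
# sinks further below zero), yet the overall score drops 40 -> 20.
```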
For the programming languages C and C++, header files were also taken into account for the code coverage metric. This is unfair, because these files usually don't contain any code and thus only influence the code coverage score negatively. The document has been changed to exclude header files from the code coverage definition.
The level F boundary is not correct for compiler warnings. It currently is 83.22% in the document, but according to the definition it should be 85.15%. This has been adjusted.
The definition of code duplication contains arbitrary borders and is much too strict (i.e. > 4% code duplication is level F). The new definition is based on 1% is level C and 10% is level F. The document has been adjusted accordingly.
If multiple compilers are used, the total number of possible compiler warning types increases. This results in a higher TQI score, especially when files are not compiled by all available compilers. This is not fair, so the TQI score is now calculated per compiler and then combined. This has been adjusted in the document.
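The inflation effect can be sketched as follows. The scoring function below is a placeholder (assume the score grows with the fraction of possible warning types that are never triggered); the real formula is in the TQI document.

```python
# Hypothetical score: fraction of possible warning types never triggered.
def score(triggered_types, possible_types):
    return 100.0 * (1 - triggered_types / possible_types)

# Compiler 1 knows 100 warning types, of which the code triggers 20 -> 80%.
single = score(20, 100)

# Pooling a second compiler that adds 300 types which never even see the
# code inflates the score: 20 of 400 types triggered -> 95%.
pooled = score(20, 400)

# Per-compiler calculation: score each compiler on its own warning types and
# combine only the compilers that actually compiled the code -> stays at 80%.
per_compiler = score(20, 100)
```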
Currently all imported dependencies have the same weight for the fan out metric. This is not fair, because importing system libraries has less impact on modularity than importing own modules. Moreover, importing system libraries is a good thing: it promotes code reuse. The new definition distinguishes between external fan out (imports of external libraries) and internal fan out (imports of own modules). External fan out has 4 times less impact on the fan out score than internal fan out.
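The 4-times-less weighting can be sketched as a weighted count. The function name and inputs are illustrative; only the 1:4 ratio comes from the entry.

```python
# Assumption: fan out is a weighted count of imports, with an external
# import contributing a quarter of an internal one.
def weighted_fan_out(internal_imports, external_imports):
    return internal_imports + external_imports / 4.0

# 10 internal imports and 8 external imports count as 10 + 8/4 = 12.
example = weighted_fan_out(10, 8)
```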
The benchmark figures for cyclomatic complexity have changed considerably, because the number of lines of code checked by TIOBE increased from 200 MLOC to 300 MLOC. The new benchmark is used now.
In case of interface inheritance, header files for the programming languages C and C++ must be duplicated. It is not fair to consider this code duplication, because it can't be solved. So header files are now excluded from the code duplication definition.
The differences between levels A, B and C for code duplication are too small, and those between levels D, E and F too large. The multiplication factor of the code duplication score has been increased to adjust this.
Fan out for C# was calculated in a simple way: every "using" statement was considered to import 5 entities. Now that it is possible to calculate the exact number of imported entities, the text of the definition has been adapted.
Metric coverage is an indication of how much code can be checked for a metric. This property was only used for metrics that use the compliance factor, but it should hold for all metrics. Metric coverage has been removed from the compliance definition and added as a property of all metrics.
The TQI scores are percentages, not just numbers. All TQI scores have been replaced by percentages in the document.
The TQI scores needed to reach a certain level did not include the boundary value, e.g. level A was reached when the TQI score was greater than 90. That should have been greater than or equal to 90. This has been fixed.
The definition of compiler warnings is much too strict. In most cases the result is either level A (no warnings) or level F (a few compiler warnings or more). The definition has been relaxed.
Since all software systems are rated in the same way, there should be a mechanism to differentiate between safety-critical projects, business-critical projects and other projects. Recommendations have been added about which level should be achieved for which kind of software.
The TQI label was not looking nice; for example, the labels A and F are now outlined properly.