When a tech team completes the build of a new software product, a leader’s job isn’t done. The next essential step is ensuring that the new tech performs as intended and meets customers’ expectations.
A variety of key performance indicators, tracked both before and after release to the market, can tell tech teams what’s working, what isn’t and where improvements can be made. Below, 16 members of Forbes Technology Council discuss KPIs they rely on for assessing software quality and why they provide valuable insights.
1. Revenue Generated
The most important metric for almost any internet business is how much revenue is generated. We focus our efforts and prioritize all our initiatives around maximizing customer conversion rates and increasing revenue and profits. – Adam Ayers, Number 5
2. Defect Escape Rate
The defect escape rate has stood the test of time as the one metric that gives us enough visibility to continuously optimize processes, methodologies and automation investments pre-launch. The bug encounter rate, measured post-launch, gives us a customer impact view that’s better than any other measure at telling us what’s slipping through. Additionally, it gives us the means to make the right priority calls in addressing defects. – Shailaja Shankar, Cisco
3. Defects Identified During UAT
The number of defects identified by users during user acceptance testing is an important KPI to understand our team’s quality. It not only sheds light on the quality of the software, but also on where the different teams in the software development life cycle need to improve. – Selva Pandian, DemandBlue
4. Net Promoter Score
Everything we build is in the service of our end users. Software quality impacts the KPI that keeps a pulse on whether we continue to deliver value to our users. In many cases, this is the net promoter score, but it could be a similar KPI that gives insight into user loyalty and satisfaction. It is a strong lagging indicator of whether the numerous cogs of the software machine are well-oiled. – Rahul Rao, Understood.org
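As a rough sketch, the standard NPS calculation groups 0–10 survey scores into promoters (9–10), passives (7–8) and detractors (0–6), then subtracts the detractor percentage from the promoter percentage. The function below is illustrative, not drawn from the article:

```python
def net_promoter_score(scores):
    """Compute NPS from a list of 0-10 survey responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but not toward either group. The result
    ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not scores:
        raise ValueError("no survey responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)
```

For example, responses of 10, 9, 8, 7 and 3 yield two promoters, one detractor and an NPS of 20.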
5. Defect Leak
For software that is already in the market, we have found that defect leak—the number of defects found by customer users divided by the total number of defects found—is a good proxy for quality. It’s a great measure of the effectiveness of our internal quality assurance. – Sanjay Gidwani, Copado
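The formula above (customer-found defects divided by total defects found) can be sketched in a few lines; the function name and signature are illustrative:

```python
def defect_leak(customer_found, total_found):
    """Defect leak: the share of all known defects that were
    found by customers rather than by internal QA. A lower
    value indicates more effective pre-release testing.
    """
    if total_found == 0:
        return 0.0  # no defects found anywhere
    return customer_found / total_found
```

For instance, if customers reported 5 of the 50 defects found in a release, the defect leak is 0.1 (10%).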
6. Customer Feedback
Ultimately, what matters the most at all times is customer feedback. There are many other KPIs that our product, quality assurance and development teams apply, but all these KPIs serve to ensure we get excellent customer feedback on each new capability we roll out. – Maria Scott, TAINA Technology
7. Number Of Active Users
We keep a close eye on the number of active users and the growth of this number over time. If users find our software valuable enough to use regularly and they keep coming back, that means we’re adding value in their lives and/or roles. In my view, “quality” can also be defined as effectively addressing a demand in the market. If the number of active users grows, that means we’re building something that’s useful and valuable. – Emilien Sanchez, Whaly
8. User Experience
User experience is an important KPI for assessing the quality of the software a team builds because it directly impacts how users interact with and perceive the product. A positive user experience is a strong indicator of a high-quality software product, and it drives customer satisfaction and improves user adoption and retention. In the long term, it can reduce support and training costs as well. – Qusai Mahesri, Xpediant Digital
9. Code Coverage
I’d highlight code coverage as a significant KPI. It measures the percentage of code that is covered by automated tests, giving us a sense of the potential for undiscovered bugs. High coverage usually means fewer bugs and better maintenance. – Sandro Shubladze, Datamam
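In practice, tools such as coverage.py, JaCoCo or Istanbul compute this automatically, but the underlying ratio is simple: executable lines that ran at least once under the test suite, divided by all executable lines. A minimal sketch (the sets here stand in for what a coverage tool records):

```python
def line_coverage(executed_lines, executable_lines):
    """Statement coverage: the fraction of executable lines
    that were run at least once by the test suite.

    Both arguments are sets of line numbers; in real use a
    coverage tool collects them during a test run.
    """
    if not executable_lines:
        return 1.0  # nothing to cover
    covered = executable_lines & executed_lines
    return len(covered) / len(executable_lines)
```

Note that high line coverage does not guarantee the tests assert the right things; branch coverage and mutation testing are stricter complements.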
10. Bugs And Bug Trends
Tracking bugs and bug trends across both positive and negative testing is an excellent metric (and a best practice) when quality testing software releases. These trends can be analyzed against the size and complexity of each release and will help shape better unit and integration testing practices so that the quality-testing stage gets more efficient. – Mark Schlesinger, Broadridge Financial Solutions
11. Mean Time To Detect
Mean time to detect measures how long it typically takes for a software flaw or problem to be discovered. This KPI offers important insight into how effectively the testing and monitoring processes find and fix software issues. – Neelima Mangal, Spectrum North
12. Customer Churn
The No. 1 KPI I rely on when assessing the quality of our software is customer churn. Our product is designed to help people as long as they own a business. If we see a large spike in users canceling or failing to renew, we know that something’s not right, and we need to take a closer look. Generally, we will look at our analytics and reach out to existing customers to find the problem. – Thomas Griffin, OptinMonster
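A common way to quantify the "spike in cancellations" mentioned above is the period churn rate: customers lost during a period divided by customers at the start of it. This sketch is illustrative, not the author's own calculation:

```python
def churn_rate(customers_at_start, customers_lost):
    """Churn rate for one period (e.g. a month): the fraction
    of period-start customers who cancelled or failed to renew
    during the period.
    """
    if customers_at_start == 0:
        return 0.0
    return customers_lost / customers_at_start
```

Losing 10 of 200 customers in a month, for example, is a 5% monthly churn rate; a sudden jump in this number is the signal to investigate.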
13. Mean Time To Repair
We evaluate software quality by looking at the mean time to repair, which measures the average time it takes to resolve defects or issues that arise in the software after its release. MTTR reflects the efficiency of our team’s bug-fixing process and the overall robustness of the software. A low MTTR means we can quickly identify and address problems, leading to higher customer satisfaction. – Cristian Randieri, Intellisystem Technologies
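MTTR as described above is an average over resolved issues: the elapsed time from report to resolution, summed and divided by the number of issues. A minimal sketch, assuming each ticket is a (reported_at, resolved_at) pair:

```python
from datetime import datetime, timedelta

def mean_time_to_repair(tickets):
    """MTTR: average elapsed time from defect report to
    resolution, over a list of (reported_at, resolved_at)
    datetime pairs for resolved tickets.
    """
    if not tickets:
        raise ValueError("no resolved tickets")
    total = sum((resolved - reported for reported, resolved in tickets),
                timedelta())
    return total / len(tickets)
```

Two tickets resolved in two and four hours, for instance, give an MTTR of three hours.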
14. Feature Usage Ratio
One KPI I lean on is the feature usage ratio. It’s simple: We check how many of our users are actually using each feature we’ve built. If a feature isn’t getting much use, it’s a red flag that we may not be building what our users really need or that we’re failing at communication and training. This helps us stay lean, effective and user-focused. – Andres Zunino, ZirconTech
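The check described above amounts to a simple ratio per feature: users who touched the feature in a period, divided by all active users in that period. A hypothetical sketch using user-ID sets:

```python
def feature_usage_ratio(feature_users, active_users):
    """Share of active users who used a given feature during
    the period. Both arguments are sets of user IDs; only
    feature users who were also active in the period count.
    """
    if not active_users:
        return 0.0
    return len(feature_users & active_users) / len(active_users)
```

A ratio near zero for a feature the team considered important is the red flag the author describes, whether the cause is the feature itself or how it was communicated.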
15. Adoption Velocity
Adoption velocity is a crucial KPI for assessing software quality. It measures how quickly new features or updates are adopted by users after release. High adoption velocity indicates that the software is delivering value and effectively meeting users’ needs. On the other hand, a slow adoption rate may indicate issues with usability, communication or the relevance of the changes. – Jagadish Gokavarapu, Wissen Infotech
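One simple way to operationalize this (the 14-day window and data shape are assumptions for illustration, not from the article) is the fraction of eligible users who first used a new feature within a fixed window after release:

```python
def adoption_velocity(adoption_days, window_days=14):
    """Fraction of eligible users who adopted a new feature
    within `window_days` of its release.

    `adoption_days` maps each eligible user to the number of
    days after release at which they first used the feature,
    or None if they never did.
    """
    if not adoption_days:
        return 0.0
    adopted = sum(1 for d in adoption_days.values()
                  if d is not None and d <= window_days)
    return adopted / len(adoption_days)
```

Tracking this number release over release shows whether changes are landing faster or slower with the user base.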
16. Energy Utilization; CPU Cost
Energy utilization and CPU cost are important KPIs for us. The monetary cost and environmental impact of poorly optimized software can be significant. If a codebase takes more CPU or RAM to run, it inevitably increases the hardware, cloud or energy spend required to operate it. Failures in optimization can often also point to other defects that may be operationally sound (that is, non-breaking), but could be detrimental in other ways. – Christopher Dean, Digital Tactics Ltd.