Asad Khan is the Co-Founder & CEO of LambdaTest, an AI-native unified enterprise test execution cloud platform.
The software testing industry is at a defining moment. The global software testing market reached $52.45 billion in 2024 and is projected to grow to over $57 billion in 2025. Organizations are expected to ship new features at a pace, and to quality standards, that would have seemed impossible only a few years ago. And while developers race to build those features, quality assurance struggles to keep up.
So, the question is, should testers own quality? And if yes, do they have the right tools to do so?
Are testers driving quality decisions?
No, but they’re gaining more influence. Testing teams now have a say in release readiness in 86% of organizations, up from 81% the previous year. However, this statistic reveals something concerning. Testers still lack decision-making authority in 14% of companies, where they’re relegated to purely execution roles without strategic input.
I also see a clear evolution where testers are increasingly consulted, but the responsibility for continuous quality remains distributed across development teams. Only 22% of organizations have testing done exclusively by dedicated testers, while 32% report that between 10% and 50% of testing work is performed by non-dedicated team members like developers and product owners.
Do testing teams own quality metrics?
Testing teams are measured primarily on traditional quality indicators, not business outcomes. The 2025 State of Testing Report reveals that 56% of teams are evaluated on test coverage metrics, 54% on defect metrics and 45% on test execution metrics. What's alarming is that only 14% are measured on Net Promoter Score, a direct business-impact metric, down from 18% the previous year.
This disconnect between what testers measure and what businesses value introduces a fundamental problem: Teams focus on activities rather than outcomes, making it difficult to demonstrate the strategic value of quality engineering to organizational leadership.
How much do developers actually trust AI for quality tasks?
Developers show skepticism about AI handling quality-critical work. Stack Overflow’s 2025 survey of over 49,000 developers found that 76% of developers don’t plan to use AI for deployment and monitoring tasks, while 69% resist using it for project planning. Their primary frustration is dealing with “AI solutions that are almost right, but not quite,” which can make debugging more time-consuming.
Despite the majority of software development professionals now using AI tools, I’ve seen trust in AI-generated code accuracy actually decline. This is backed by findings that show trust in AI-generated code falling from 40% to just 29% year over year. When stakes are high, 75% of developers still prefer asking human colleagues for help rather than relying on AI.
What prevents true continuous quality today?
Existing quality toolchains weren’t built for continuous quality. While many organizations have mature test automation frameworks, most of those frameworks don’t integrate seamlessly into CI/CD pipelines.
Even fewer can automatically generate and maintain end-to-end tests without manual intervention. That gap forces QA teams into a reactive stance, where they spend much of their time updating brittle test scripts instead of validating new features.
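To make the "brittle script" problem concrete, here is a minimal, hypothetical Python sketch of the self-healing idea behind AI-native test maintenance: a locator with ordered fallbacks, so one renamed CSS id doesn't break the test. The fake DOM, selectors and helper function are all illustrative assumptions, not any real framework's API.

```python
# Hypothetical sketch: a locator with ordered fallback selectors.
# The "DOM" here is just a dict standing in for a rendered page.

def find_element(dom: dict, selectors: list):
    """Return the element matched by the first selector that resolves."""
    for sel in selectors:
        if sel in dom:
            return dom[sel]
    raise LookupError(f"none of {selectors} matched")

# Fake rendered page: the button id changed from #buy-btn to #checkout-btn.
page = {"#checkout-btn": "<button>Buy now</button>"}

# A test pinned to the single old selector would fail here; the fallback
# chain still resolves, so the test keeps validating the feature instead
# of demanding a manual script update.
element = find_element(page, ["#buy-btn", "#checkout-btn"])
print(element)  # <button>Buy now</button>
```

Real self-healing platforms infer those fallbacks automatically from past runs; the point of the sketch is only that resilience has to live in the locator strategy, not in constant human patching.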
What are the challenges with AI testing?
As organizations explore AI to help bridge gaps in testing speed and coverage, they quickly run into new challenges that AI alone can’t solve.
There’s an initial learning curve and cultural resistance to AI-generated tests.
It’s hard for teams to trust AI-generated test cases, especially when people are accustomed to having full control over their test suites. To overcome this, start with pilot projects in noncritical areas, allowing teams to build confidence gradually while maintaining manual oversight.
Its “black box” nature makes it difficult to understand AI decisions.
Teams often struggle to understand why a particular test was generated or how the self-healing mechanism made its decisions. This lack of transparency can create significant hesitation and slow adoption across testing teams.
Substantial infrastructure and data requirements exist.
Some AI testing platforms need historical test data, comprehensive application logs and well-documented requirements to function effectively. At LambdaTest, we worked to overcome this by enabling optional access to GitHub repos and letting the agent learn from real dev conversations in PRs. But organizations with legacy systems or inadequate documentation may first need to create that documentation before AI-native testing can deliver value.
There’s a risk of amplifying existing biases in test coverage.
If your manual tests historically overlooked certain edge cases, the AI may perpetuate those gaps. Conduct regular audits of AI-generated tests and maintain a hybrid approach where experienced testers review and validate the AI’s work, especially for high-risk functionality.
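A coverage audit like the one described above can be as simple as diffing what the generated suite exercises against a full inventory. This Python sketch uses hypothetical endpoint names purely for illustration.

```python
# Hypothetical audit sketch: surface the gaps an AI-generated suite may
# have inherited from historically incomplete manual tests.

# Full inventory of application endpoints (illustrative names).
all_endpoints = {"/login", "/checkout", "/refund", "/export"}

# Endpoints the generated suite actually exercises.
covered = {"/login", "/checkout"}

# Anything in the inventory but not in the suite is a candidate gap
# for an experienced tester to review.
gaps = sorted(all_endpoints - covered)
print(gaps)  # ['/export', '/refund']
```

Running an audit like this on every release keeps humans in the loop exactly where the article recommends: on the high-risk functionality the AI never learned to test.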
Complementary strategies beyond AI alone are needed.
Organizations should combine AI tools with strategies like shifting quality left through developer-owned tests, robust code review practices and shared quality ownership. Invest in observability and monitoring tools like OpenTelemetry, DataDog or SigNoz for real-time quality signals.
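As a minimal illustration of the "real-time quality signals" idea, the sketch below emits a structured test-run summary as a JSON log line. A real deployment would use an SDK such as OpenTelemetry's; the event fields here are illustrative assumptions, not any vendor's schema.

```python
import json
import logging

# Minimal sketch: emit a structured quality signal as a JSON log line
# that an observability backend could ingest. Field names are illustrative.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("quality-signals")

def report_test_run(suite: str, passed: int, failed: int, duration_s: float) -> dict:
    """Build and log one structured event summarizing a test run."""
    event = {
        "event": "test_run_completed",
        "suite": suite,
        "passed": passed,
        "failed": failed,
        "pass_rate": round(passed / (passed + failed), 3),
        "duration_s": duration_s,
    }
    log.info(json.dumps(event))
    return event

signal = report_test_run("checkout-e2e", passed=48, failed=2, duration_s=312.5)
print(signal["pass_rate"])  # 0.96
```

Once signals like this flow into a dashboard, pass rates and durations become trends a team can act on, rather than numbers buried in CI logs.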
Additionally, AI-native testing platforms have become more common for automating test creation, maintenance and analysis, helping to free testers to focus on strategy and exploratory testing rather than script maintenance.
What does the future hold for quality responsibility?
The data points toward a hybrid model where quality responsibility becomes increasingly distributed. And automation is reducing the need for non-testers to dedicate time to maintaining tests. In my eyes, the verdict is clear: Software testers are not solely responsible for continuous quality, nor should they be.
The most effective approach combines dedicated quality expertise with distributed quality ownership across development teams. Success here requires aligning quality metrics with business outcomes, investing in both technical and communication skills, and maintaining the strategic independence of quality engineering while embedding quality thinking throughout the organization.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
