Business leaders have been relying on the same innovation strategies for decades, but there are statistical tools and frameworks that can deliver measurable improvement for modern businesses in an AI-powered economy. What made pioneering British statistician Ronald Fisher’s approach so revolutionary in the early 20th century wasn’t just that it brought math into science; it was how economically it did so. By introducing randomization and orthogonal designs that test factors independently of each other, Fisher showed that you could extract powerful insights from surprisingly few trials. Each experiment was structured so that multiple variables could be tested simultaneously, allowing researchers to see the true effect of each factor individually and to capture the effects of their interactions.
This efficiency turned experimentation from an expensive, one-factor-at-a-time exercise into a disciplined, resource-saving engine of discovery, a framework that remains one of the most powerful yet underused tools in business today. When that framework is paired with a genuine understanding of a business process, gained through collaboration and wide inductive inquiry, the results can be transformative.
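For readers who want to see the mechanics, here is a minimal sketch in Python, with hypothetical factors and made-up numbers, of the orthogonal logic Fisher introduced: two factors are varied together across just four runs, yet each factor's effect, and their interaction, can be read off independently.

```python
# Minimal illustration of an orthogonal 2x2 factorial (hypothetical factors
# and response values, for intuition only).
import numpy as np

# Coded levels: -1 = low, +1 = high
factor_a = np.array([-1, +1, -1, +1])     # e.g., a price change
factor_b = np.array([-1, -1, +1, +1])     # e.g., a page layout change
response = np.array([100, 90, 120, 130])  # made-up outcome for each run

# Orthogonality: the two design columns are uncorrelated, so the effects
# of A and B do not get mixed together.
assert factor_a @ factor_b == 0

# Main effect = mean response at the high level minus mean at the low level.
effect_a = response @ factor_a / 2
effect_b = response @ factor_b / 2
# Interaction: does A's effect depend on the level of B?
interaction = response @ (factor_a * factor_b) / 2

print(effect_a, effect_b, interaction)  # here: A ~0, B +30, interaction +10
```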
Going Beyond The A/B Test To Increase Sales
Nearly a century after Fisher’s breakthrough, many business leaders either are unaware of these methods or assume that their business processes and challenges are not amenable to meaningful statistical experimentation beyond an A/B test. In an age when companies are data-driven and seeking competitive advantage, this makes little sense. Kieron Dey, a leading statistician and founder of the business consultancy Nobi Group, told me in an interview that he has confronted this directly. Working with a US-based apparel retailer to overcome a revenue downturn in 2020, he persuaded them to do more than their proposed A/B test of a layout change to their online store (grouping similar styles versus showing them haphazardly). Instead, they ran a two-week experiment that tested eight variables, including the change they had originally proposed as well as the landing page layout, the number of items on each page, listing influencer endorsements, remarketing to repeat customers, changing the keywords listed, and increasing prices by 10%.
They found that while changing the layout as first proposed in the A/B test would lessen the recession-induced sales decline, combining that page layout with the right number of items per page increased sales by about 25%. Because customers proved relatively insensitive to price, the 10% increase held and margins improved too. In short, the experiment helped the company make more money per product while also selling more products.
When using multifactorial tests, the value comes from including many variables and looking for the interactions among them. It does, however, take a degree of statistical sophistication to test a large number of factors efficiently. The techniques used are known as fractional factorial designs: experiments that test only a mathematically chosen subset of all possible combinations. While they trade a small amount of precision for speed and efficiency, they remain just as statistically rigorous as an A/B test. Working with Cairo, Egypt-based Mantrac Group, a distributor of heavy equipment, Dey tested a wide range of factors to increase its Net Promoter Score (NPS), a common customer loyalty metric. The testing determined which channels to use (email, SMS, WhatsApp, field agents, branches, and so on), when the survey should be administered (after a branch visit, website chat, or call center discussion, for example), and what the survey itself should contain.
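To make that concrete, here is a minimal sketch of how such a design can be built, a textbook 2^(7-4) construction with placeholder factor labels rather than any design Dey actually used: seven two-level factors fit into eight runs because four of the factors are assigned to interaction columns of a small full factorial, which keeps every factor column orthogonal to every other.

```python
# Sketch of a fractional factorial: 7 two-level factors in 8 runs instead of
# the 2**7 = 128 runs a full factorial would require. Placeholder factors.
from itertools import product
import numpy as np

# Full factorial in three "base" factors A, B, C: 8 runs.
base = np.array(list(product([-1, +1], repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]

# The remaining four factors ride on interaction columns (the "generators"),
# which is what makes this a mathematically chosen subset of all 128 combos.
D, E, F, G = A * B, A * C, B * C, A * B * C
design = np.column_stack([A, B, C, D, E, F, G])

# Every pair of factor columns is orthogonal, so each main effect is estimated
# cleanly; the price paid is that main effects are aliased with some
# higher-order interactions (the "small amount of precision" traded away).
assert np.array_equal(design.T @ design, 8 * np.eye(7, dtype=int))
print(design)  # 8 runs x 7 factors; each row is one experimental condition
```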
“Before we think of some big investment that might work, we look at the breadth of possible improvement with an experiment and let the data tell the story,” managing director Leonardo Zambitte told me in an interview. The granularity of what is discovered is notable: among a few other factors, creating a bigger text box for suggestions, adding emojis, and changing the rating scale from three points to five raised response rates fivefold and doubled Mantrac’s Net Promoter Score.
Even When Designing Experiments, Always Use A Human Touch
In one healthcare case, Dey worked with the Medicare Advantage provider XLHealth (now part of Minnesota-based United Healthcare) to reduce hospitalizations and improve the efficiency of its telehealth nurses. Dey and his team collaborated with both management and frontline nurses to identify about twenty potential operational changes to test. Before the experiment began, Dey verified that the necessary data were reliable, that historical records confirmed the results would not be biased, and that nurses and patients could be properly randomized.
The variables ranged widely: increasing patient load per nurse, conducting pharmaceutical reviews with patients, providing condition-specific educational kits, scheduling transition calls after hospital discharge, and testing remote versus office-based work, among others.
Over a three-month period, the structured experiment revealed that a specific combination of four interventions reduced hospitalization rates by more than 20%—saving the client millions of dollars annually.
The math behind this is striking. Testing 20 interventions through a well-designed experiment effectively explores over a million possible combinations, uncovering relationships that would be invisible through traditional A/B testing. Many of the results were counterintuitive. Expanding patient loads by 50%, for example, did not diminish care quality, and whether nurses worked from home or the office made no measurable difference. Randy Brown, an economist and then Director of Health Research at Princeton, New Jersey-based Mathematica, who studied the trial, says, “I have tried to convince CEOs to do more of these. They are so valuable but CEOs don’t like change.”
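The arithmetic behind that claim is easy to check: with each intervention either on or off, 20 interventions define 2^20 = 1,048,576 possible combinations, yet an orthogonal design covers the main effects in a few dozen runs. Here is a minimal sketch using an illustrative Hadamard-based orthogonal array, not the actual XLHealth design.

```python
# Back-of-the-envelope check of the combinatorics (illustrative design only).
import numpy as np
from scipy.linalg import hadamard

print(2 ** 20)  # -> 1048576 possible on/off combinations of 20 interventions

# A 32x32 Hadamard matrix has mutually orthogonal +/-1 columns. Dropping the
# constant column and keeping 20 of the rest yields a 32-run design in which
# every intervention column is orthogonal to every other, so all 20 main
# effects can be separated from just 32 structured trials.
H = hadamard(32)
design = H[:, 1:21]                    # 32 runs x 20 two-level interventions
assert np.array_equal(design.T @ design, 32 * np.eye(20, dtype=int))
print(design.shape)                    # (32, 20)
```

Thirty-two structured runs standing in for more than a million possible combinations is exactly the economy Fisher's framework buys.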
But the real secret wasn’t only in the math—it was in the collaboration. Dey’s team worked closely with frontline nurses to understand what actually happened day to day, challenging assumptions and uncovering hidden barriers to change. In one case, a “pharmaceutical review” variable that tested well was underused. An analysis of call data and a candid discussion with the nurses revealed why: the review was time-consuming, and nurses’ compensation was partly tied to call efficiency. A short discussion with managers to redesign incentives quickly eliminated the impediment, making the review feasible and realizing the gains.
While some CEOs think their business processes are too complex for rigorous
experimentation or that they already know what will work, Dey’s work shows the opposite. His approach combines scientific precision with deep empathy for how organizations function. While mathematicians aren’t typically known for their bedside manner, Dey’s openness and curiosity make experimentation a shared process rather than an imposed one. He begins every experiment by asking managers and employees to predict which factors will matter most. “They’re always wrong,” he says with a smile—and that’s exactly the point. Each experiment begins with what people think they know, and ends by revealing what truly drives results.
Why The Statistics Matter For Profitable Businesses
The ability to directly attribute change to outcome is game-changing. Many companies today rely on attribution modeling: they measure spending across marketing channels (digital ads, direct mail, social media) and run regression analyses on historical data to estimate which ones drive results. But those models reveal correlation, not causation. Correlation is noticing two things move together; causation is proving that one moves the other. Attribution modeling can’t tell what actually caused the change. A properly designed experiment can.
By randomizing spending levels across channels in a structured, multifactorial test, a company can isolate the real impact of each variable and discover where incremental dollars actually boost sales. In work with a large insurance company that wanted to optimize its marketing and media strategy, Dey tested 29 variables, including television, radio, search engine marketing, social media channels, and content type and display, among others, while accounting for their effects on different sales channels.
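As a simplified illustration of that logic, with hypothetical channels and numbers rather than the insurer's 29-variable test: assign low and high spend levels to test markets by design, and the high-versus-low difference in average sales becomes a causal estimate of each channel's lift.

```python
# Hypothetical randomized spend test across four channels and 16 markets.
from itertools import product
import numpy as np

rng = np.random.default_rng(0)
channels = ["tv", "radio", "branded_sem", "social"]

# Every low (0) / high (1) spend combination across four channels: 16 cells,
# randomly shuffled onto 16 test markets for the test period.
design = np.array(list(product([0, 1], repeat=len(channels))))
design = design[rng.permutation(len(design))]

# Simulated ground truth (unknown in a real test): incremental sales from
# moving each channel from low to high spend, plus market-level noise.
true_lift = np.array([4.0, 1.0, 0.5, 8.0])
sales = 100 + design @ true_lift + rng.normal(0, 1.0, size=len(design))

# Because spend was set by the design rather than observed in historical data,
# the high-vs-low gap in mean sales estimates each channel's causal lift.
for j, name in enumerate(channels):
    lift = sales[design[:, j] == 1].mean() - sales[design[:, j] == 0].mean()
    print(f"{name:>12}: estimated lift {lift:.1f} (true value {true_lift[j]})")
```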
The results were clear and prompted the company to change its mix directly. For example, spending on SEM for the brand name could be lowered significantly compared with SEM for the unbranded product type. The company cut back on traditional media (news outlets, TV, and radio) and targeted the remaining ads by market segment. The result was a 10% decrease in marketing spending alongside a 28% increase in policies written.
That precision comes from statistical design. Fisher’s original insight, randomly assigning treatments so that every factor is tested independently in a real-world situation, means that only the factors, or combinations of factors, that genuinely matter rise above the noise of real-life complexity. Once “normal variation” is measured and accounted for, any improvement beyond that baseline can be trusted as real. Whether in manufacturing, healthcare, or marketing, a properly randomized design can separate signal from noise even in complex, dynamic systems. The right design depends on context: a serial operation, like a production line, demands different tools than a parallel one, like multichannel marketing. But in both cases, the math allows decision-makers to act on evidence rather than instinct.
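A minimal sketch of that idea, with made-up weekly numbers: measure the baseline spread first, then ask whether an observed lift is bigger than that noise could plausibly produce on its own.

```python
# Illustration of "normal variation" as the yardstick for a real improvement.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Weekly results under the current process: the spread here is the baseline
# noise any genuine effect has to rise above.
baseline = rng.normal(loc=100, scale=5, size=12)

# Weekly results under the changed process (simulated here with a true +6 shift).
treated = rng.normal(loc=106, scale=5, size=12)

result = stats.ttest_ind(treated, baseline)
print(f"observed lift: {treated.mean() - baseline.mean():.1f}")
print(f"p-value against normal variation: {result.pvalue:.3f}")
# A small p-value means the lift exceeds what chance fluctuation alone would
# plausibly produce; a large one means it is indistinguishable from noise.
```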
And sometimes, the biggest breakthroughs come from the least likely places. At a printing company Dey worked with, a new and inexperienced employee suggested changing printer paper more often—a trivial idea, or so it seemed. The data proved otherwise: older paper absorbed moisture, became thicker, and slowed production. That small adjustment eliminated bottlenecks. At Knoxville Utilities Board, employees told Dey that a series of tests on customer payment behavior revealed why efforts to drive paperless billing had stalled. It wasn’t messaging or incentives—the older customers simply liked visiting the local office. No model could have shown that, but a structured experiment could.
The point is that companies can test smarter. Fisher provided the mathematical tools; Dey demonstrates how to use them, reaching across silos and building experiments on a broad understanding of how things actually work. It can work for your business too.

