By Christopher Koch, SAP
If IT spending plans are any guide, our current obsession with AI is making our last obsession, digital transformation, seem quaint. A survey by ClearML found that 91% of respondents plan to use internal resources and staff to build generative AI (GenAI) capabilities.
However, these companies seem to have forgotten about the sorry state of their data and the infrastructure they need for AI to succeed. Creating stores of reliable data and the systems needed to power them—the core of most companies’ ongoing digital transformation efforts—is plodding work. The data tortoise to the GenAI hare.
Companies may be putting their money on the hare right now, but some see the importance of a sound, old-fashioned data strategy. Financial services companies, which have been investing in AI for years, are creating new roles to focus on both AI and the underlying data systems. JPMorganChase, for example, has a chief data and analytics officer who sits on its operating committee and reports to the CEO.
Taking the long view is crucial to success with AI, and CFOs appear to have gotten the memo. McKinsey found a 25% increase from 2023 to 2024 in the number of CFOs who say that long-term planning and resource allocation are their top priorities, including for technology. Their focus on strategic planning, meanwhile, shot up by 22%.
CFOs have a strategic role to play in AI
Among the most important strategic decisions for AI will be how to fund it, which sits squarely in the CFO’s domain. An estimate by ClearML suggests that the first-year cost of training, fine-tuning, and running a large language model (LLM) for 3,000 employees hovers around US$1 million using an in-house team.
For a technology-intensive financial services company like JPMorganChase, $1 million is a crumb in a bread factory. But in most companies, that $1 million might be diverted from those slower-moving data and infrastructure improvement projects.
That would be a mistake. A survey by S&P Global found that 42% of companies in 2025 are abandoning the majority of their AI initiatives before they reach production, compared to just 17% a year earlier. Companies that kept going were more likely to have focused on data availability, as well as on managing compliance and risk.
Don’t let AI get ahead of strategy
The AI abandonment rate underlines the need to keep investing in the data and infrastructure work of digital transformation without hamstringing AI progress.
“Traditional data management operations are too slow, too structured, and too rigid for AI teams,” says Roxane Edjlali, senior director analyst at Gartner. “Moreover, in traditional data management, uses of data are not well documented, and data is often collected in siloes across various repositories, multiple systems, and platforms.”
Though funding both tracks will raise costs in the short term, companies will likely be better off in the long term if they focus on AI implementations that can run on commercially available LLMs while they shore up the infrastructure they need to build more powerful AI capabilities in-house.
The advantages of going external for AI
Here are the key advantages to using commercial LLMs in the short term:
- Ease of use. Vendors and consultants know AI and LLMs better than the employees of all but the most technology-intensive companies do. That means less need for internal technology expertise.
- Lower initial cost. Just plug into the vendor’s LLM. Paying for LLMs by the sip costs more per use, but it avoids the high upfront costs of building a bespoke LLM, whose output will be less reliable than a commercial model’s until internal data quality improves.
- Continuous updates. There’s no need to upgrade software or infrastructure as LLM technology improves.
- Growth as needed. Since commercial LLMs are cloud-based, it’s easy to add capacity when it’s needed.
Risks to watch for
All that said, every strategy comes with risks. Here are a few to watch for:
- Higher long-run costs. Researchers agree that using commercially available LLMs is more expensive than building your own over the long term.
- Outages. LLMs are a new technology, and research shows that skyrocketing usage is straining these systems and leading to more frequent outages, which users can neither predict nor control.
- Privacy issues. Using sensitive data in a query to a commercial LLM could violate a company’s privacy policies and EU regulations such as the AI Act and the GDPR.
- Lack of flexibility. As employees become more adept at querying LLMs, they may find that a given commercial LLM doesn’t always give them the results they need. Companies may need to provide access to more than one LLM to get the best results for their teams.
Let’s call them learning moments
Strategy experts advise thinking about these disadvantages as learning moments. Experience with external AI will help inform decisions about using AI internally. All in all, the advantages of focusing on improving internal data and infrastructure outweigh the disadvantages that come from relying on an external LLM—for now.
To learn more about how CFOs contribute to their company’s AI strategy, as well as how AI will affect the finance department, read AI in finance: Myths, misconceptions, and reality.
This story was originally published on sap.com.