AI in financial services has moved past the hype, but implementation still stalls where it matters most: data quality, internal capabilities, and practical governance. To understand what’s working in the real world, I spoke with three leaders building the next generation of no-code and low-code AI tools: Christian Buckner, Head of Data and AI at Altair, which owns RapidMiner; Michael Berthold, CEO and co-founder of KNIME; and Devavrat Shah, professor of AI at MIT and co-founder of Ikigai Labs. What emerged was a clear playbook for banks, insurers, and fintechs looking to leverage AI safely and effectively. Start by fixing your data before chasing models. Use AI to amplify your domain experts, not sideline them. Prioritize explainability and guardrails over novelty. And stop chasing flashy chatbot demos; instead, build focused, contextual tools that do the unglamorous work of planning, reconciling, and forecasting. This is what it looks like when financial institutions take AI seriously.
Christian Buckner, Head of Data and AI at Altair
Forget Flashy AI, Fix Your Data First
The biggest obstacle to effective AI isn’t regulation, risk, or technical know-how. It’s data. All three speakers echoed the same frustration: siloed systems. Whether you’re in banking, insurance, or asset management, chances are your data lives in too many places, governed by too many people, in formats no one trusts. AI can’t fix that. In fact, it only amplifies the mess if used too early.
Christian Buckner emphasized that real progress starts with integrating and contextualizing data. He highlighted the use of knowledge graphs to unify previously disconnected systems, calling it the foundation for safe and scalable automation. “You need a contextual model that includes both internal data and external rules like regulatory constraints,” he said. “Once that’s in place, automation becomes far more reliable, and hallucinations from generative models can be eliminated entirely.”
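The contextual model Buckner describes can be sketched in miniature as a knowledge graph of subject-predicate-object triples that links internal records to external rules. The triple store, predicate names, and `rules_for_account` helper below are hypothetical illustrations of the pattern, not Altair's implementation:

```python
# Minimal triple store: each fact is a (subject, predicate, object) tuple.
# Internal data (accounts, owners) and external rules (regulations) share one graph.
triples = {
    ("acct_001", "owned_by", "customer_42"),
    ("acct_001", "domiciled_in", "EU"),
    ("EU", "subject_to", "GDPR"),
}

def objects(subject: str, predicate: str) -> set:
    """All objects reachable from `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def rules_for_account(acct: str) -> set:
    """Follow domicile edges out to the regulatory constraints governing them."""
    return {rule
            for region in objects(acct, "domiciled_in")
            for rule in objects(region, "subject_to")}
```

Because the regulatory constraint is an edge in the same graph as the account data, an automated workflow can look up which rules apply before acting, rather than relying on a generative model to recall them.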
Devavrat Shah agreed. “We have a ton of data sources and workflow tools, but the AI tooling that connects them is still missing,” he said. “What we need are specialized, purpose-built models that live within the enterprise and understand the specific tasks they’re built to support.”
Devavrat Shah, professor of AI at MIT and co-founder of Ikigai Labs.
Trust Doesn’t Mean Perfect, It Means Predictable
We talked at length about trust, not in theory but in practice. Can a bank trust AI to approve a mortgage, flag fraud, or run a forecast? Shah made it clear that AI is not a crystal ball. “AI is not perfect by design. It provides directional information. The key is to treat it like an input, not an answer,” he explained. He drew a parallel with betting strategies: if AI has a 51 percent edge, you don’t bet everything. You diversify, manage risk, and make decisions accordingly.
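Shah's betting parallel maps naturally onto the Kelly criterion, which is my framing rather than his: for an even-money bet with win probability p, the optimal fraction of capital to stake is 2p − 1, so a 51 percent edge justifies risking only about 2 percent per decision.

```python
def kelly_fraction(p: float, odds: float = 1.0) -> float:
    """Kelly stake for win probability p at net odds b: f* = (b*p - (1 - p)) / b.

    With even odds (b = 1) this reduces to 2p - 1.
    """
    return (odds * p - (1 - p)) / odds

# A 51% edge at even odds -> stake about 2% of capital, not everything
stake = kelly_fraction(0.51)
```

The same logic applies to treating AI output as "directional information": a small statistical edge changes how much weight a decision gets, not whether the human process around it disappears.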
Michael Berthold added that the context in which AI is used determines how much trust is appropriate. “If I’m looking for trends in data, the model doesn’t need to be perfect. But if I’m forecasting revenues or hiring, it needs to be very accurate,” he said. He stressed the importance of transparency. “Too many systems give you a result with no way to dig into how it was calculated. That’s unacceptable in finance.”
Buckner noted that governance must be built into the data layer itself. “You define who sees what, what models can do with that data, and how outcomes are evaluated. Then you can add traceability so every action is auditable,” he said. “If the model steps outside its boundary, the request fails. That’s how you build trust.”
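The boundary-plus-audit pattern Buckner describes can be sketched generically: every request is logged, and anything outside the defined policy fails. The class and field names below are illustrative assumptions, not RapidMiner's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Policy:
    """Who sees what: the set of tables a model is allowed to touch."""
    allowed_tables: set

@dataclass
class AuditedDataLayer:
    policy: Policy
    audit_log: list = field(default_factory=list)  # every action is traceable

    def query(self, agent: str, table: str) -> str:
        allowed = table in self.policy.allowed_tables
        # Record the attempt whether or not it succeeds, so audits see denials too
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), agent, table, allowed)
        )
        if not allowed:
            raise PermissionError(f"{agent} may not access {table}")
        return f"rows from {table}"  # stand-in for a real data fetch
```

A forecasting agent querying its permitted ledger table succeeds; the same agent reaching for a payroll table fails, and both attempts land in the audit log.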
Michael Berthold, CEO and co-founder of KNIME
No-Code AI Isn’t For Amateurs, It’s For Speed
There is a lingering misconception that no-code platforms are simplistic. But as Berthold put it, “We see teams move from massive Excel macros to KNIME workflows that are faster, safer, and auditable. It’s not about removing complexity, it’s about handling it responsibly.”
He emphasized that tools like KNIME let users build automated workflows without knowing how to code, while still requiring them to understand the logic behind each model. “Data literacy is key. You don’t need to know how a method is implemented, but you need to know what it does,” he said.
Buckner expanded on this, describing how RapidMiner lets non-technical teams act independently without losing oversight. “If you can empower your domain experts to tweak visualizations or run their own analysis, you eliminate bottlenecks,” he said. “Meanwhile, expert users can focus on the high-value, high-impact problems.”
This dual-mode approach enables collaboration rather than isolation. As Buckner explained, “Business teams can move quickly without compromising security or quality, because they’re operating within guardrails defined by the platform.”
Specialized AI Beats Generalized AI
When the conversation turned to model architecture, all three leaders rejected the idea that bigger is always better. Shah, in particular, was clear: “The current model where a few companies own massive models and everyone else consumes them is not the endgame. The future is small, contextual models that live within the enterprise.”
These role-based agents are more efficient, cheaper to run, and far less risky. They can live inside a firm’s firewall, interact directly with structured internal data, and avoid the data leakage concerns associated with using external APIs.
Berthold noted that even predictive AI applications like credit scoring or risk simulations don’t need large models. “You can build highly effective predictive models from existing datasets, and you can run ‘what if’ simulations to explore different decisions without exposing data to the cloud.”
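Berthold's "what if" simulations can be illustrated with a toy scoring model that runs entirely in-house: score a baseline applicant, perturb one input, and compare. The weights below are hand-set for illustration, not fitted to any data:

```python
import math

# Hand-set illustrative weights; a real model would be fitted to historical data
WEIGHTS = {"income_k": 0.03, "debt_ratio": -2.5, "late_payments": -0.8}
BIAS = -1.0

def approval_probability(applicant: dict) -> float:
    """Logistic score: estimated probability the application is approved."""
    z = BIAS + sum(w * applicant[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

baseline = {"income_k": 60, "debt_ratio": 0.4, "late_payments": 1}
what_if = {**baseline, "debt_ratio": 0.2}  # simulate the applicant paying down debt

delta = approval_probability(what_if) - approval_probability(baseline)
```

Nothing here leaves the firewall: the model, the applicant data, and the simulated scenario all stay on internal infrastructure, which is the point Berthold is making.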
This is especially appealing to risk-averse financial institutions, which can now apply AI without compromising control or regulatory compliance.
AI Won’t Replace People, It Will Change How We Work
All three experts agreed that AI’s real power lies not in automation for its own sake, but in augmentation. Berthold predicted that interaction models will shift away from the current “chatbot everything” trend. “The next frontier is AI that quietly observes and offers meaningful suggestions, like a co-pilot, not a search bar,” he said.
Shah described this as a natural evolution of the human-machine relationship. “We used to have to learn the machine’s language. Now the machine is learning ours. That opens the door to more intuitive, collaborative systems,” he said. But he was quick to add a caveat: “Explainability is now just as important as accuracy. If people don’t understand the output, they won’t use it. Period.”
Buckner framed the future in terms of speed and scale. “You can onboard ten AI agents faster than hiring one new analyst,” he said. “But it’s not about replacing people. It’s about giving your team leverage to work smarter, faster, and with more confidence.”
Takeaways For Financial Institutions
From these three perspectives, several clear lessons emerge for banks, insurers, and fintechs seeking to implement AI safely and effectively:
Start with data integration, not model training: Building a contextual foundation using knowledge graphs or structured workflows pays dividends. Most failures stem from poor data hygiene, not bad algorithms.
Use AI to amplify domain experts, not replace them: No-code tools allow risk, finance, and compliance staff to build their own workflows while reserving complex tasks for data scientists.
Prioritize explainability and governance: AI outputs should be traceable, auditable, and embedded with compliance rules. If a model can’t explain itself, it doesn’t belong in a financial setting.
Don’t chase flashy use cases: Many of the most valuable applications are “boring” internal optimizations (budgeting, forecasting, reconciliation), not chatbot front ends.
Smaller models are often better: Focused, context-aware AI agents tied to specific roles or workflows are easier to deploy, govern, and trust.
Invest in data literacy: Giving tools to business users without training is a recipe for failure. Literacy enables responsible experimentation.
What Matters Next
The real winners won’t be the firms chasing headlines or pouring money into the biggest model. They’ll be the ones quietly building robust, interpretable systems that let humans and machines work side by side. And if this conversation was any indication, that future is already under construction.
For more like this on Forbes, check out How AI, Data Science, And Machine Learning Are Shaping The Future and Who Owns The Algorithm? The Legal Gray Zone In AI Trading.