The financial services industry is entering a decisive phase in its adoption of artificial intelligence. The excitement that once surrounded model size and technical novelty has given way to a focused priority: turning artificial intelligence (AI) into a trustworthy, value-generating asset that strengthens operational efficiency and meets regulatory expectations. Institutions are integrating AI into core decision-making systems, where failure can create operational, reputational, and customer risks.
Strategic discipline begins with a focus on return on investment
The earlier phase of AI adoption revealed a common problem. Powerful tools were often deployed without a clearly defined business purpose. Erwin Lu, chief technology officer at Lexin, described this misstep as an “arms race”, noting that “a lot of this money was wasted.” He explained that institutions must now centre their AI strategies on tangible business outcomes. “It’s not that the most advanced is the best, or the most expensive is the best, but the most practical is the best,” he said.
Lu’s team developed a framework called CAR, or “Confidence of AI Result”, to evaluate models based not only on technical performance but also on operational risk, error recovery costs, and ease of integration. In one use case, a three-billion-parameter model outperformed more complex alternatives when used in customer service, particularly when paired with human oversight. According to Lu, the decisive factor was not model strength but workflow design. Human reviewers could quickly and effectively correct the model’s outputs, which improved the tool’s cost-performance ratio.
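A framework like CAR can be pictured as a weighted trade-off between raw accuracy and deployment-side costs. The sketch below is purely illustrative: the article does not publish Lexin's actual criteria or formula, so the field names, weights, and scale here are assumptions chosen to show how a cheaper-to-correct model can outscore a technically stronger one.

```python
# Illustrative sketch of a CAR-style model evaluation score.
# Criteria, weights, and the 0-1 scale are hypothetical; the article
# does not disclose Lexin's actual framework.

from dataclasses import dataclass


@dataclass
class ModelAssessment:
    accuracy: float             # technical performance, 0 to 1
    operational_risk: float     # 0 (low risk) to 1 (high risk)
    error_recovery_cost: float  # 0 (cheap to correct) to 1 (costly)
    integration_effort: float   # 0 (drop-in) to 1 (major rework)


def car_score(m: ModelAssessment) -> float:
    """Combine accuracy with deployment penalties into one score.

    Higher is better. The weights are assumed values reflecting the
    article's point that workflow fit can outweigh raw model strength.
    """
    penalties = (0.3 * m.operational_risk
                 + 0.4 * m.error_recovery_cost
                 + 0.3 * m.integration_effort)
    return m.accuracy * (1.0 - penalties)


# A smaller model whose mistakes are cheap for human reviewers to
# catch can outscore a stronger but harder-to-correct alternative.
small = ModelAssessment(accuracy=0.88, operational_risk=0.2,
                        error_recovery_cost=0.1, integration_effort=0.2)
large = ModelAssessment(accuracy=0.95, operational_risk=0.5,
                        error_recovery_cost=0.6, integration_effort=0.5)

print(car_score(small) > car_score(large))  # the smaller model wins here
```

Under these assumed weights, the heavy penalty on error recovery cost is what lets the three-billion-parameter model with human oversight come out ahead, mirroring Lu's observation that workflow design, not model strength, was the decisive factor.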
Su Bo, vice general manager of Spark Enterprise Group at iFlytek, pointed out that leading developers no longer promote parameter counts as the defining feature of a model. “Large models now have two trends: one is getting bigger, and the other is getting smaller,” he said. Effective deployment, he explained, depends on the organisation’s data quality, computing power, staffing capacity, and internal processes. He emphasised that practical application and cost efficiency matter more than technical scale.
This evolving mindset is transforming how financial institutions approach operational performance. AI must be embedded into workflows in a way that produces measurable results while minimising friction and risk.
AI reshapes operations through human–machine collaboration
The recalibration around return on investment has redefined the operating model for AI in financial services. Rather than replacing workers, institutions are focusing on how AI can amplify expertise and streamline internal systems. Lu explained that Lexin had doubled its coding efficiency within three years by deploying AI across development, engineering, and design. This increase in output allowed the company to complete a full rewrite of its legacy core systems. The result was a reduction from 11 million lines of code across a thousand systems to just over two million lines of clean code spread across three platforms.
This transformation reflects a broader shift in how firms deploy talent. Institutions now expect smaller, skilled teams to utilise advanced tools to deliver more comprehensive and complex outcomes.
Juergen Rahmel, a lecturer at the University of Hong Kong and AI lead in the chief digital office at HSBC Hong Kong, highlighted the importance of consistency in outputs: in financial services, consistent outputs ensure that customers in similar circumstances receive similarly structured advice. This approach reduces risk, strengthens compliance, and builds customer trust. Institutions benefit when human–AI collaboration is designed to support disciplined, replicable action.
Governance determines whether AI becomes an asset or a liability
The more deeply AI is embedded in regulated industries, the more critical governance becomes. Rahmel cautioned that many large models rely on public internet data, which exposes them to manipulation. Malicious actors can pollute training data by injecting false information, which can later influence the model’s outputs in high-stakes scenarios, such as credit scoring or transaction validation. He referenced a growing tactic in financial crime, where deepfake video calls are used to impersonate senior executives and trigger fraudulent payments. These risks affect people and systems alike.
Rahmel argued that financial institutions must move toward smaller, curated models trained on high-quality internal data. These models may lack the scale of general-purpose systems but offer better explainability, lower risk, and greater alignment with regulatory standards. “You cannot use a model with 90% accuracy in a regulated banking scenario,” he said. “It is not good enough.”
Su agreed that model explainability remains a top concern. While engineering techniques exist to mitigate black-box behaviour, foundational gaps remain, particularly in complex or professional tasks that require high-precision responses. He also highlighted operational challenges, including high energy consumption, the cost of computing infrastructure, and the necessity for internal systems to evolve in tandem with AI deployment. These challenges require financial institutions to reassess architecture, team structure, and integration strategy.
Institutions prioritise internal readiness for scalable AI adoption
Education, governance, and internal alignment are essential to effective AI adoption. Rahmel noted that although models can be purchased or licensed from third parties, institutions must still train internal teams to understand how to use them. “You must educate the people who ask the questions, not just those who build the tools,” he said. Failure to do so can result in misinterpretation, misplaced confidence, or poor integration.
He also advocated domain-specific models aligned with business needs and internal governance, instead of large-scale general-purpose systems that do not meet regulatory or organisational requirements.
The future belongs to institutions that build trust and capability in parallel
AI adoption in financial services is entering a more deliberate and structured phase. Lu cautioned that early disillusionment may cause firms to underestimate the long-term impact of the technology. “You easily overestimate the change in one year, and underestimate it in ten,” he said. Su added that the value of AI will depend not on its novelty but on how it is applied. “It may be the shortest-lived or the most valuable technology. What matters is how we use it.”
The financial industry now faces a structural challenge. Institutions must move from experimentation to execution, embedding AI where it delivers business value, mitigating risks through internal control, and preparing staff to interact with intelligent systems as skilled partners. Competitive advantage will favour those who combine model governance, operational design, and institutional readiness in a way that delivers consistent, explainable, and trusted outcomes.