Adam Lieberman, head of artificial intelligence and machine learning at Finastra.

With the introduction of ChatGPT, the promise of generative AI (GenAI) and large language models (LLMs) has taken the world by storm. Even at this early stage, a wide range of use cases for these technologies has emerged across a variety of industries, including in my field of financial services. At the same time, foundation models have democratized the ability to create and use AI tools via APIs, and implementing GenAI solutions at the enterprise level is now more of a software engineering challenge than a hard data-science problem.

Is Traditional Machine Learning The Answer?

Amid this GenAI fervor, some organizations may be tempted to deploy the latest LLM technology for problems that don’t require such a sophisticated solution. As LLMs become increasingly accessible, technology leaders need to recognize the importance of evaluating the necessity and efficiency of using such advanced tools for specific problems—especially when a simpler solution may deliver a better outcome. In fact, many of the software challenges organizations face today can be solved by traditional machine learning (ML) models.

To illustrate this point, here are scenarios in which traditional machine learning may be the answer.

There’s one specific problem to solve.

If an organization has a specific use case that is solvable by traditional statistical learning, an LLM may not be the ideal solution, particularly when there are strict requirements for latency and explainability. That said, beginning with an LLM can be a strategic choice, especially for tasks that benefit from zero-shot or few-shot learning. For instance, in sentiment analysis of customer reviews—using automation to determine whether feedback is positive or negative—the versatile learning capabilities of an LLM might simplify development. However, if this approach doesn’t yield the desired results, transitioning to traditional statistical methods may be more effective. This strategy allows for the initial exploitation of an LLM’s broad applicability with the option to shift toward more conventional techniques, potentially bringing together the best of both worlds to create more refined solutions.
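To make the sentiment-analysis example concrete, here is a minimal sketch of the "conventional technique" side of that trade-off: a bag-of-words naive Bayes classifier built entirely from the standard library. The toy training reviews and labels are illustrative, not from any real dataset; a production model would be trained on a much larger labeled corpus.

```python
import math
from collections import Counter

# Toy labeled reviews (illustrative only); a real deployment would
# train on a large labeled corpus of customer feedback.
TRAIN = [
    ("great product fast shipping", "pos"),
    ("love it works perfectly", "pos"),
    ("terrible quality broke quickly", "neg"),
    ("awful support very slow", "neg"),
]

def train_counts(examples):
    """Count word occurrences per class for a bag-of-words model."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(text, counts):
    """Pick the class with the highest Laplace-smoothed log-likelihood."""
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

counts = train_counts(TRAIN)
print(classify("great quality love it", counts))
print(classify("broke quickly awful", counts))
```

A model like this trains in milliseconds, runs with negligible latency and is fully inspectable—exactly the properties that make the traditional route attractive when an LLM proves to be more than the problem requires.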

Cost and energy efficiencies are a priority.

Implementing a traditional machine learning model can be more cost-effective than developing a cutting-edge, customized LLM—particularly if an organization has already built the ML model. LLMs also use a significant amount of energy, so there’s an environmental factor for organizations to consider. And although some ML models can take hours to train and might still incur considerable costs, LLMs can be more expensive depending on which foundation model is hosted and utilized, making this a technical decision as well as a financial one. In scenarios where the application is specific and well-defined, opting to develop a bespoke ML model could be more cost-effective and sustainable than adopting or building a comprehensive LLM. This nuanced decision-making process underscores the importance of evaluating both the immediate and long-term impacts of choosing between LLMs and traditional ML models, factoring in efficiency, cost and organizational values.

Traditional ML models have a track record of success in the organization.

Integrating new LLMs with traditional ML models leverages the strengths of each to create a powerful and cohesive system. An LLM can respond to incoming queries by itself or dynamically consult a more appropriate traditional model, enhancing efficiency and accuracy. For instance, in a customer service scenario, an integrated system can predict the likelihood of customer churn, suggest strategies for retention and identify opportunities for upselling.

What’s particularly powerful about this setup is the ability to manage and utilize hundreds of specialized traditional ML models under the umbrella of a single LLM. This approach ensures that specific queries are addressed by the most capable model, from initiating conversations with a chatbot adept at recognizing signs of customer dissatisfaction to transitioning to solutions that enhance customer loyalty and increase sales. The true strength lies in the system’s capacity to intelligently navigate through a vast array of ML models and deliver targeted outcomes.
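The routing pattern described above can be sketched in a few lines. In this hedged example, the model registry holds trivial stand-in callables rather than trained models, and a keyword match stands in for the LLM's dispatch decision (which in practice might be made via the model's function-calling capability); the feature names are hypothetical.

```python
# Registry of specialized traditional models keyed by task.
# The callables are stand-ins so the routing pattern is runnable
# on its own; in production they would be trained ML models.
MODEL_REGISTRY = {
    "churn": lambda f: "high churn risk" if f.get("logins_last_30d", 0) < 3
             else "low churn risk",
    "upsell": lambda f: "offer premium tier" if f.get("usage_pct", 0) > 80
              else "no offer",
}

def route(query: str, features: dict) -> str:
    """Dispatch a query to the most appropriate specialized model.

    A real system would let the LLM choose the target model (e.g. via
    function calling); this keyword match stands in for that decision
    so the control flow is clear.
    """
    if "cancel" in query or "leave" in query:
        return MODEL_REGISTRY["churn"](features)
    if "upgrade" in query or "premium" in query:
        return MODEL_REGISTRY["upsell"](features)
    return "answer directly with the LLM"

print(route("I might cancel my account", {"logins_last_30d": 1}))
```

The design point is that the LLM acts as a conversational front door while each specialized model stays small, auditable and independently retrainable.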

Strict governance is required.

As an example, payment fraud detection is a critical area where explainable models are essential. In such cases, statistical methods are preferable because they provide transparency and traceability, which are crucial for compliance and auditability. LLMs, although powerful, are primarily designed for text processing and lack the inherent governance frameworks needed for sensitive applications like fraud detection. On the other hand, summarizing customer reviews presents an ideal use case for LLMs because these models excel at summarizing and categorizing text without the need for extensive retraining or data collection. Their pre-existing knowledge base allows them to efficiently categorize and analyze reviews, making them an ideal tool for businesses looking to understand customer feedback quickly and accurately.
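A brief sketch shows why a statistical model satisfies the traceability requirement. With a logistic model, every flagged transaction comes with a per-feature breakdown of its log-odds, giving auditors an exact account of the decision. The coefficients and feature names below are hand-set for illustration; in practice they would be fit to historical labeled transactions.

```python
import math

# Illustrative, hand-set coefficients for a logistic fraud model;
# a real model would learn these from labeled transaction history.
WEIGHTS = {"amount_zscore": 1.2, "foreign_country": 0.9, "night_time": 0.4}
BIAS = -2.0

def fraud_score(tx: dict):
    """Return fraud probability plus each feature's additive
    contribution to the log-odds -- the audit trail that compliance
    reviewers can inspect line by line."""
    contributions = {f: WEIGHTS[f] * tx.get(f, 0.0) for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    prob = 1 / (1 + math.exp(-logit))
    return prob, contributions

prob, why = fraud_score(
    {"amount_zscore": 3.0, "foreign_country": 1, "night_time": 1}
)
print(round(prob, 3), why)
```

Nothing comparable falls out of an LLM's opaque weights, which is why governance-heavy applications tend to favor the statistical route.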

Traditional ML Applications For Financial Services

There are many areas in the financial services field where organizations can solve problems with traditional ML models instead of developing and leveraging a brand-new LLM. Specific examples where traditional machine learning is likely the right answer in a financial services setting include:

• Recommending the best financial products to account holders.

• Analyzing accounts to predict customer churn and account holders’ likelihood of leaving a financial institution.

• Enhancing network security by analyzing IP addresses and blocking users that pose a threat.

There’s a lot of noise right now about GenAI and LLMs, and my team is among the many who are excited about the use cases these technologies offer. At the same time, resource constraints are a reality for many organizations. In the quest for innovative and sophisticated AI technologies, I encourage technology and AI leaders to remember Occam’s Razor—the principle suggesting that the simplest explanation is often the best one.

As the field of AI surges forward, especially within the rapidly evolving landscape of large language models, it’s thrilling to witness the continued excellence in traditional machine learning research. The burgeoning synergy between advanced LLMs and foundational ML techniques is particularly exciting and promises innovative solutions that leverage the strengths of both worlds.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
