As generative artificial intelligence tools such as ChatGPT, Claude, Gemini and others see increased use among professionals across industries, some organizations may consider building their own specialized GenAI platforms for their own use, their customers’ use, or both. It’s an effort that can come with multiple benefits, including solutions tailored for a company’s specific industry and clients, the ability to better control sensitive data, better experiences and outcomes for users, and much more.

However, building a bespoke GenAI solution also comes with significant complications that organizations need to be aware of before they begin the process. Below, members of Forbes Technology Council discuss some of the challenges inherent in building specialized GenAI platforms and how they can be addressed.

1. Striking A Balance Between Expert Feedback And AI

When developing a specialized GenAI model for an industry that is heavily reliant on human domain expertise, such as manufacturing, striking a balance between expert feedback and AI is key. GenAI on its own can hallucinate. Organizations must pair GenAI with a library of domain data and AI algorithms that learn from expert feedback to avoid inaccuracies and drive reliable outputs. – Saar Yoskovitz, Augury

2. Controlling The System’s Access To Data

GenAI is top of mind, and for good reason. Productivity increases are a game-changer, but without good governance, disaster can occur. Access to data by these systems must be carefully monitored and controlled, and—just as important—user prompts should be too. Asking, “Show me addresses of single high-net-worth individuals in my zip code” is a red flag, and if data is returned, a big problem. – Devin Redmond, Theta Lake
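A minimal sketch of the kind of prompt screening described above: a deny-list filter run before a user prompt ever reaches the model. The patterns and policy here are illustrative assumptions, not a production control; a real deployment would combine a policy engine, classifiers and audit logging.

```python
import re

# Illustrative red-flag patterns: prompts fishing for personal data.
# These are toy examples, not a complete or recommended rule set.
RED_FLAG_PATTERNS = [
    r"\b(home\s+)?address(es)?\b.*\b(individual|person|customer)s?\b",
    r"\bhigh[- ]net[- ]worth\b",
    r"\bsocial security number\b",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for a user prompt."""
    hits = [p for p in RED_FLAG_PATTERNS if re.search(p, prompt, re.IGNORECASE)]
    return (len(hits) == 0, hits)

allowed, hits = screen_prompt(
    "Show me addresses of single high-net-worth individuals in my zip code"
)
# This prompt trips two patterns and should be blocked and logged for review.
```

The same hook point is where prompt and response logging would live, so that flagged requests can be reviewed by a governance team rather than silently answered.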


3. Preventing Leaks Of Customers’ Intellectual Property

Preventing leaks of one customer’s IP into another’s results—and earning the trust of business users highly concerned with this risk—is a critical challenge. In response, builders should prioritize the use of public datasets, explicitly define how customer data will be used and establish strict data usage agreements to ensure each user and customer won’t share or use protected information. – Tal Lev-Ami, Cloudinary

4. Ensuring Data Privacy And Regulatory Compliance

A key challenge in developing a generative AI platform is ensuring data privacy and regulatory compliance. Address this by forming a cross-functional data governance team, using advanced anonymization and security measures, adopting privacy-preserving techniques such as federated learning, and maintaining transparency with stakeholders to build trust and demonstrate ethical practices. – Dr. Suresh Rajappa, KPMG LLP

5. Managing Training Data Quality And Diversity

A key challenge in developing specialized generative AI is managing training data quality and diversity. Generative AI models need vast amounts of high-quality data, especially in nuanced fields. Organizations should invest in comprehensive datasets and continuous data collection and refinement. Collaborating with industry experts for data curation and labeling enhances quality and ensures ethical use. – Tim Bates, Oppos

6. Building An Infrastructure That Supports High Speed And Scaling

Generative AI is only as good as the data it relies on. To develop a successful platform, all requisite data must be aggregated on an infrastructure that supports high speed and scaling. This includes support for less-structured data previously not considered “analytics grade.” Data lakehouses support these requirements and offer cost-effective storage and computing. – Michael Meucci, Arcadia

7. Accounting For Bias Within Training Data

Accounting for bias within training data is a key challenge when building a specialized GenAI platform, as this affects the quality and reliability of the output generated. To assess and mitigate bias, use open-source toolkits such as IBM’s AI Fairness 360, Google’s What-If Tool and Microsoft’s Fairlearn. Incorporate human feedback loops to continuously evaluate and refine the AI model outputs. – Raj Neervannan, AlphaSense, Inc.
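Toolkits like the ones named above automate bias metrics, but the core idea can be shown with a hand-rolled sketch of one such metric: the per-group selection rate behind demographic parity. The data below is invented for illustration.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Per-group positive-prediction rate (the metric behind demographic parity)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

# Toy labels: 1 = the model produced a "favorable" output for this record.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)                   # {'A': 0.75, 'B': 0.25}
parity_gap = max(rates.values()) - min(rates.values())   # 0.5 -> worth auditing
```

A large gap between groups is the signal that triggers a deeper audit and the human feedback loop the tip recommends; the dedicated toolkits add many more metrics plus mitigation algorithms.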

8. Keeping Up With AI’s Evolution

Imagine building a superpowered GenAI platform—one that provides blazing-fast answers and top-notch quality while being cost-efficient and secure. The challenge? The world of AI keeps evolving. The secret weapon? An “orchestration layer” that adapts to any large language model, constantly self-learning and optimizing for speed, quality and cost. This makes your GenAI platform a self-evolving champion, always ahead of the curve. – Sunil Dixit, FIRST ABU DHABI BANK
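One way to picture such an orchestration layer is as a router that sends each request to the cheapest model meeting its quality and latency needs. The model names, scores and prices below are invented for illustration; a real layer would also learn these values from observed performance.

```python
# Hypothetical model catalog -- values are illustrative, not real benchmarks.
MODELS = [
    {"name": "small",  "quality": 0.70, "latency_ms": 200,  "cost": 0.1},
    {"name": "medium", "quality": 0.85, "latency_ms": 600,  "cost": 0.5},
    {"name": "large",  "quality": 0.95, "latency_ms": 1500, "cost": 2.0},
]

def route(min_quality: float, max_latency_ms: int) -> str:
    """Return the cheapest model satisfying the request's constraints."""
    candidates = [m for m in MODELS
                  if m["quality"] >= min_quality and m["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise ValueError("no model meets the constraints")
    return min(candidates, key=lambda m: m["cost"])["name"]

choice = route(min_quality=0.8, max_latency_ms=1000)   # picks "medium"
```

Because the routing table sits outside any one model, swapping in a new LLM is a catalog update rather than a platform rewrite, which is what keeps the platform ahead of a fast-moving field.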

9. Earning Users’ Trust And Ensuring Transparency

Ensuring users’ trust and transparency is essential in developing a generative AI platform. Achieve this by making model outputs explainable and educating users on AI operations and limitations. Maintain transparency in development and deployment processes. Implement feedback mechanisms for continuous improvement and issue reporting. This builds confidence and ensures responsible, accountable AI usage. – Michael Haske, Krista.ai

10. Accessing The Necessary Computing Power

The single most challenging hurdle to clear is the limited availability of the compute processing power needed to develop and iterate on AI. Right now, large corporations control much of the available hardware and focus it on their own new products. Solving this scarcity of GPUs will help. – Daniel Keller, InFlux Technologies Limited (FLUX)

11. Measuring The Platform’s Efficiency

Measuring the efficiency of a specialized generative AI platform is challenging and usually requires human evaluation. Organizations can hire field experts or develop an automated evaluation system to address this. Alternatively, using advanced AI models to evaluate the platform’s output can provide a scalable and practical solution, ensuring reliable assessments without extensive manual efforts. – Rodion Telpizov, SmartJobBoard
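The "use an advanced AI model to evaluate the platform" approach is often called LLM-as-judge. Below is a minimal sketch of that evaluation loop; the judge function is a stub standing in for a real call to a strong model with a scoring rubric.

```python
def judge(question: str, answer: str) -> int:
    """Stub judge: score an answer 1-5. Replace with a real model call + rubric."""
    keywords = {w for w in question.lower().split() if len(w) > 4}
    hits = sum(w in answer.lower() for w in keywords)
    return 5 if hits >= 2 else 1

def evaluate(samples):
    """Average judge score over (question, answer) pairs."""
    scores = [judge(q, a) for q, a in samples]
    return sum(scores) / len(scores)

# Toy evaluation set -- in practice this comes from real user queries.
samples = [
    ("What is our refund window", "Our refund window is 30 days."),
    ("What is our refund window", "I cannot help with that."),
]
avg = evaluate(samples)   # one strong answer, one weak one
```

Tracking this average over time gives the scalable quality signal the tip describes, with periodic spot checks by field experts to keep the judge itself honest.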

12. Securing Executive Sponsorship

For a first major AI project, the key challenge is securing executive sponsorship. Achieving maximum return requires disrupting workflows and continuous experimentation. Without long-term executive commitment, a setback or two can shut down the project before it succeeds. Executive buy-in is crucial to sustain momentum and obtain the necessary resources and support. – Alon Goren, AnswerRocket

13. Establishing Clear, Focused Objectives

There’s a common misconception that AI is a jack-of-all-trades rather than a specialized tool, which can lead to unrealistic expectations. Establishing clear, focused objectives and tasks will decrease the instances of hallucination and enhance the quality and effectiveness of the AI platform, ultimately leading to more successful outcomes. – Joseph Ours, Centric Consulting

14. Reducing Hallucinations And Errors

Hallucinations generated by GenAI can cause users to lose trust in the platform, damaging the tool’s effectiveness and reputation. Therefore, it’s essential to build in techniques that reduce errors, such as retrieval-augmented generation to constrain responses to relevant sources, content traceability, and keeping humans in the loop to verify the accuracy of the generated content. – Patrick Smith, Certara
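A minimal retrieval-augmented generation sketch: retrieve the most relevant document, then constrain the prompt to that source and cite it for traceability. The corpus and the word-overlap scoring are toy stand-ins; real systems use embeddings and a vector index.

```python
# Toy document store -- illustrative content only.
CORPUS = {
    "returns-policy": "Items may be returned within 30 days with a receipt.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Pick the document sharing the most words with the query."""
    q = set(query.lower().split())
    doc_id = max(CORPUS, key=lambda d: len(q & set(CORPUS[d].lower().split())))
    return doc_id, CORPUS[doc_id]

def build_prompt(query: str) -> str:
    doc_id, text = retrieve(query)
    # Traceability: cite the source so humans in the loop can verify answers.
    return (f"Answer ONLY from source [{doc_id}]:\n{text}\n\n"
            f"Question: {query}")

prompt = build_prompt("How many days do I have to return an item?")
```

Constraining the model to cited source text is what narrows the space for hallucination, and the embedded source ID gives reviewers a direct path back to the original document.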

15. Taking A ‘Slow And Steady’ Approach To Training

I’ve addressed the key challenge in developing generative AI for the past two years, and it’s not what most think. Imagine AI as a child. What does a child need for long-term success? Structured guidance to process new information. By opting for slightly slower upfront efforts leveraging supervised training rather than the fast path of unsupervised training, you’ll get fewer hallucinations and more reliable outputs. – Rob Tillman, Copy Chief©

16. Ensuring Its Ethical And Responsible Use

A major challenge in developing a specialized generative AI platform is ensuring ethical and responsible AI use. Organizations should establish clear ethical guidelines and governance frameworks to address bias, transparency and accountability. Regular audits, fostering a culture of ethical AI and stakeholder engagement are crucial to building trust and ensuring responsible AI deployment. – Rohit Garg, Meta

17. Anticipating Marginal And/Or Worsening Performance

Anticipate where your efforts will be marginal in the face of AI lab advancements, such as ChatGPT 5 and Claude 4. There is academic precedent for LLMs fine-tuned on domain knowledge performing worse than generally trained commercial models. In addition, research shows that fine-tuning weakens alignment, which is used both to discourage “dangerous” use cases and to reduce hallucinations. – James Ding, DraftWise

18. Balancing Innovation And Accuracy

Be prepared to balance innovation and accuracy. Current AI solutions are often prone to providing inaccurate or unexpected results, yet to achieve 100% accuracy, one might need to stick to very limited scenarios, which could limit the AI’s value. This can be mitigated by establishing robust monitoring and evaluation frameworks that continuously assess the AI’s outputs. – Itzik Levy, vcita

19. Maintaining A Calibrated Data Ecosystem

Any data stack must undergo a rigorous security and compliance evaluation before it’s used for AI. But clean, compliant data is just the start: Maintaining a calibrated data ecosystem is vital for AI success. Establishing robust governance and educating end users is crucial for developing a bespoke generative AI solution. – Kathleen Hurley, Sage, Inc.

20. Developing A Solution In Search Of A Problem

As organizations continue to invest in and learn how to leverage AI, there will be increased demand for specialized platforms. With this demand, companies could fall into a trap of “AI enablement in search of a problem,” leading to unrealized results and excessive spending. To prevent this, before seeking AI enablement, companies must first understand the AI business case, process and datasets required. – Robert Chapman, 101 Solutions
