Olga Megorskaya is Founder and CEO of Toloka AI, a data-centric AI solution that generates machine learning data at scale.

The rapid adoption of AI solutions is matched by growing public unease about AI use. The latest AI Index report from Stanford University reveals that 52% of people feel anxious about AI products and services compared to 38% just two years ago. The rising concern about AI underscores a critical challenge: How can we and the wider industry ensure AI earns and deserves our trust?

To build trustworthy AI systems, developers should focus on fairness, interpretability, privacy, safety and security—the pillars of responsible AI. Best practices include human-centered design, thorough testing and evaluation, curating representative training datasets, checking for bias, handling data carefully, identifying security threats and monitoring models in production. None of these steps are simple or straightforward, and much research and practical work is still needed to address the ethical challenges inherent in AI development.

Developers and data providers must act now and collaborate to craft AI systems that inspire trust.

1. Evaluating And Testing AI systems

For consumers to trust AI, they must be confident that these systems are thoroughly evaluated before entering the market. Over the past two years, generative AI systems have been caught providing false information and harmful content, making threats and spreading biased outputs. With widespread media coverage, these failures have created a lasting impression on the public.

ChatGPT and Bing aren’t the only ones in the spotlight. With the rise of AI assistants designed to work in specific domains, there is a high risk of these models offering inappropriate answers on sensitive topics, like the chatbot for an eating disorder helpline that was shut down for giving detrimental advice. LLMs are susceptible to attacks like prompt injection, which can trick a model into leaking sensitive information or performing dangerous actions. They can also be misused to aid crime and terrorism or to promote hate and misinformation. We don’t know what risks could emerge with new capabilities in future models, but self-replication and psychological manipulation are genuine concerns.

The only way to prevent malicious use and unauthorized behavior is by testing models extensively. There are three critical components of rigorous evaluation, which should be integral to every AI development cycle. First, a safety policy needs to be established for a specific model’s use case, and model responses are then tested to see if they violate the policy. Additional benchmarking looks for bias or unfairness in model responses. Finally, red teaming identifies weaknesses by having an independent team challenge the model to provoke undesired behavior. These requirements are more than just “nice to have”; they will soon be enforced by governments, with new AI regulations emerging in Europe and the United States.

But will these new regulations and standards be enough to improve AI safety and earn consumers’ trust? We need to be proactive and raise the bar.

2. Promoting Transparency And Collaboration

AI developers can earn trust by being more transparent about collecting training data and building their models.

The AI research community is already making significant efforts to build and share public datasets. A recent example is our company’s Beemo project, a collaboration between academia, industry, and the community to produce a publicly available benchmark for detecting AI-generated text. Anyone can use the dataset to improve artificial text detectors, and we hope it will lead to advancements in AI detection that will benefit the public and the AI industry, helping to resolve issues of mistrust and misuse of AI.

3. Enhancing AI’s Global Impact

While the global impact of AI is not always considered part of responsible development, it’s an element of fairness that deserves attention. Large communities of people who speak low-resource languages have not benefited equally from AI advancements because their languages were not represented in AI training data. Current efforts are making AI more inclusive and accessible, especially by supporting low-resource languages that have traditionally been overlooked, like a recent dataset developed for Swahili. Integrating these languages into AI development and creating multilingual AI models are essential steps toward global inclusivity.

4. Acknowledging The Humans Behind AI

Responsible AI development acknowledges the data workers who provide human insight for training and evaluating models.

Large language models, in particular, demand human-generated data that demonstrates in-depth knowledge in specialized contexts. To reduce bias in datasets, it’s important to collect data from experts and specialists across diverse backgrounds and fields. Data providers can expand the knowledge domains covered by their teams for the strongest impact on model safety.

The well-being of these experts should always be our first priority. The top AI data providers offer remote options for professionals worldwide to earn extra income and share their expertise to shape future AI products. We use automated technologies to improve the experience of experts: ensure fair pay, limit their exposure to harmful content, reduce routine work and ambiguity and provide flexibility.

Shaping A Responsible AI Future

As AI technology evolves rapidly, the stakes for responsible and ethical development have never been higher. The future of AI hinges on our collective commitment to building systems that are transparent, fair and inclusive. This is not just an ideal but an urgent necessity, one that will shape the societal landscape for generations to come.

The responsibility lies with all stakeholders, from developers and researchers to policymakers and the broader community. By prioritizing rigorous evaluation, fostering transparency and ensuring global inclusivity, we can pave the way for AI that genuinely serves humanity. The future of AI is not just in the hands of machines; it’s in ours. Let’s shape it wisely.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Share.

Leave A Reply

Exit mobile version