Integrating artificial intelligence into their products and services can help tech companies build tailored solutions with enhanced capabilities. However, AI can come with serious drawbacks—including biases and user privacy concerns—if teams don’t follow responsible development practices.

It’s crucial for tech companies to prioritize user well-being and ethical considerations when building or leveraging AI systems. Below, 20 members of Forbes Technology Council share strategies for creating AI solutions that empower users while respecting their privacy and values.

1. Adopt A ‘Responsible AI’ Framework

In the rapidly evolving landscape of AI-powered products and services, one practical strategy that stands paramount is the adoption of a “Responsible AI” framework. This approach emphasizes prioritizing user well-being and ethical considerations from the outset, ensuring that these critical aspects are not afterthoughts but foundational elements of the design and development process. – Josh Scriven, Neudesic

2. Consider Using Specialized LLM Models

While large language models are extremely capable and have now almost become synonymous with AI, the fact that they’re trained over huge amounts of data makes their behavior less predictable. Depending on the product or service in question, it may make more sense to use specialized models that are trained on much smaller datasets with more predictable behavior. – Avi Shua, Orca Security

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

3. Leverage AI Code Generation Tools To Scrutinize Devs’ Code

Application security is currently racing toward a head-on collision with the rapid rise of AI—in particular, generative AI. By collaborating with LLMs such as ChatGPT, we empower developers to securely leverage AI code-generation tools to scrutinize their own generated code. This proactive approach helps in identifying potential vulnerabilities, especially in code sourced from open-source materials. – Sandeep Johri, Checkmarx

4. Begin With User-Centered Design And XAI Principles

A couple of important thoughts: First, integrate user-centered design principles and ethical considerations from the very beginning of the development process, not as an afterthought. Second, in product development, use explainable AI (a.k.a. XAI) techniques to provide users with a basic understanding of how the AI system arrives at its decisions. This builds trust and helps users understand the reasoning behind AI. – Erum Manzoor, Citigroup

5. Have A Human Review All AI Decisions

We do not have a general AI, and self-supervision is not at a level where we can leave machines to manage themselves. A human needs to review all of an AI system’s decisions and use the data to iteratively improve the models, keeping users’ well-being and ethical considerations in mind. A step further is to build mechanisms that identify where AI is likely to display harmful bias and involve humans in those processes. – Kaarel Kotkas, Veriff

6. Keep Transparency And Explainability At The Core Of Design

Transparency and explainability have to be at the core of product design for any AI-powered solutions. Transparency builds confidence among stakeholders, and explainability improves the understanding of the reasoning behind AI’s recommendations or actions. Transparent systems are better equipped for bias detection and mitigation and are able to easily support any compliance audits required by your industry. – Shailaja Shankar, Cisco

7. Establish Clear Data Management And Training Processes

A steadfast commitment to privacy and transparency is critical when building AI-powered products. Clear processes for handling user data, collecting feedback and training your generative model, as well as transparent disclosure of how your AI works, are essential to establishing trust with your users. And that trust is vital to driving the adoption of your product and services. – Oz Alon, HoneyBook

8. Consider These Five Key Factors

Five key ethical factors must be considered when designing and implementing AI products and services. These include responsible sourcing of unbiased datasets, accountability with human oversight, bias mitigation in AI models, transparency regarding how systems are being used, and the collective integration of all these principles during the product life cycle and beyond. – Alan O’Herlihy, Everseen

9. Operationalize Responsibility From The Design Through Post-Sales Stages

Operationalize responsible AI development from the design stage all the way through the post-sales and/or customer success stages. This way, most risks can be mitigated as part of the regular product development process, and even if things go wrong post-launch, it is easier to manage escalations. – Didem Un Ates, Goldman Sachs

10. Establish A Governing Body

Don’t leave these discussions to the developers: Ethical AI is leadership’s responsibility. My advice is to establish a governing body that is tasked with managing user well-being and ethical questions. They should develop decision frameworks for developers to apply. – Glen Robinson, Platform One

11. Embed User Well-Being And Ethics In Design Sprints

When building AI products, embed user well-being and ethics in design sprints. Brainstorm potential risks and mitigation strategies alongside core functionalities. Prioritize solutions that benefit both users and society. Regular reviews and user feedback loops help maintain ethical standards throughout development. – Sergey Mashchenko, Light IT Global Limited

12. Ensure The Tool Has A Real Use Case

Does this AI tool truly solve a user’s challenge, problem or requirement? When offering a service to a user, AI can feel cold and somewhat clinical in its approach. Is there a clear path to a use case that will provide a satisfactory outcome for the user? Consider and account for as many user stories as possible. – Arran Stewart, Job.com

13. Have Testers Try To ‘Break The System’

We test and benchmark pretty much every new AI model or model change at Integrail.ai, and we have found that testing is absolutely key. You have to have a number of predefined cases of people trying to “break the system,” and you need to run them every time you make a change to your AI multi-agent. – Anton Antich, Integrail

14. Implement User Feedback Loops

A key AI strategy is to implement robust user feedback loops. By incorporating user feedback throughout the design and development process, tech teams can ensure that their AI-powered products align with users’ values and prioritize users’ well-being. Additionally, establishing multidisciplinary teams that include ethicists and social scientists can help organizations identify and address potential ethical considerations early on. – Ankur Pal, Aplazo

15. Prioritize Peer Review And Interdepartmental Checks And Balances

Tech teams need to establish clear standards and commit to ongoing oversight of the AI models driving their products. They should prioritize intrateam peer review and interdepartmental checks and balances, such as change control boards. In addition, they should provide regular release notes to communicate evolving features and changes to internal and external recipients. – Kempton Presley, AdhereHealth

16. Implement Data De-Identification

To prioritize user well-being in AI design, it is essential to implement data de-identification techniques. Removing personal identifiers through methods such as pseudonymization and anonymization protects privacy, ensures compliance with data protection laws and builds trust. Regular updates to these methods are crucial so that you continuously adapt to technological advancements. – Hashim Hayat, Walturn

17. Leverage These Three Strategies

There are three practical ways to prioritize user well-being and ethical considerations when designing AI tools. 1. Prioritize having a diverse team of humans who provide feedback to the rewards training model. 2. Draw clear boundaries in terms of the questions that the AI shouldn’t be answering versus those that it should, and default to escalation to humans if the AI isn’t sure. 3. Build continuous feedback loops to enable users to provide feedback on the tool’s outputs. – Pranav Kashyap, Central

18. Use The ‘FIRST’ Framework

I would recommend using the “FIRST” framework for AI. This includes feedback (F) mechanisms for user issues; integrity (I) in ethical training, with the inclusion of diverse data; regular (R) ethical reviews; stakeholder (S) inclusion from the start; and transparency (T) about data use and compliance. – Viplav Valluri, Nuronics Corp.

19. Maintain And Regularly Review A Results Log

Maintain a log of the AI tool’s results and periodically review it. Following the dictum of “failing on the way to success,” holding a post-mortem on the outcomes of the AI tool will reveal areas that need fixing or retuning. Conducting these reviews as a team is even better and will highlight the gravity of “getting it right.” – Henri Isenberg, ReviewInc

20. Allow Users To Control Their Data

Give users transparency and choice when it comes to exactly how their data is being stored and collected. This should include the ability to opt out of certain features or data-sharing requirements. – JJ Tang, Rootly

Share.

Leave A Reply

Exit mobile version