Beena Ammanath – Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of “Trustworthy AI” and “Zero Latency Leadership”
In this era of humans working with machines, being an effective leader with AI takes a range of skills and activities. Throughout this series, I’m providing an incisive roadmap for leadership in the age of AI, and an important part of leading effectively today is making sure your people are at the center of decision-making.
One of the most important components of value-driving artificial intelligence (AI) programs is not found in the machine; it is found in the human workforce.
AI models do not spring from the technology fully formed and ready to deploy. They are conceived, shaped and managed by humans. Within a business context, human stakeholders make the fundamental decisions about what AI to build, how to build and use it and how to manage the risks.
Part of the challenge is developing models and use cases that are equally accessible, valuable, trustworthy and compliant for end users across demographics and geographies. I believe that achieving this at scale takes diversity.
A greater diversity of decision-makers can surface a broader range of ideas and considerations that support AI governance, risk mitigation and, ultimately, business value. I urge you to consider the ways in which a more diverse group of AI stakeholders can impact each stage of the AI lifecycle.
Conception And Development
When identifying a valuable use case, there are decisions to be made at the outset as to its trustworthy use and outcomes. Who will be impacted by the use case and what questions does that raise around things like fairness, privacy, safety and accountability?
These are important questions for the organization because the answers directly influence governance and compliance. Asking the right questions is essential, and a greater diversity of stakeholders supports a more holistic approach to deciding which use cases to pursue and how best to do so.
It’s important to recognize that stakeholders are not restricted to data scientists, AI engineers and other professionals engaged in an AI model’s technical development. I believe that just as important are project managers, line of business users, quality assurance professionals, executives and people from across the organization.
When stakeholders provide perspectives and concerns informed by their nationality, language, cultural norms and all the things that make them unique, they provide the input and feedback that helps support ethical, trustworthy decision-making during model conception and development.
Scaling And Deployment

One of the greatest advantages of AI is its ability to scale. Gains in efficiency, capacity and predictive recommendations can add up to significant value in the aggregate. Yet, the risks compound as well, and for AI to scale in a valuable way, it needs to work equally well for all humans. Inherently, that requires a diversity of people contributing to model management.
This is not just a concern over how well the application functions; it can have a bottom-line impact. If a call center chatbot is more accurate and capable for users in one region than in another, then by extension some market segments will receive lower-quality customer service or engagement, which in turn can hamper sales and customer satisfaction.
People with varied backgrounds are well positioned to anticipate and flag these kinds of issues surrounding AI deployment. It is unrealistic to assume that the technical professionals who build and run AI models will foresee and mitigate every eventuality and ethical issue after deployment. Instead, it takes creative thinking from across the business, with regular opportunities and waypoints for stakeholder input.
Model Assessment

Evaluating whether a model is performing as intended means considering not just the value delivered but also the risks, both expected and surprising. The insights gained through model assessment can be fed back into the development lifecycle to enable continuous improvement and consistent attention to compliance issues.
The diversity of people contributing to this effort helps the assessment process. Are unexpected risks emerging in the application? Is the model performing equally well for all stakeholders? What new rules or laws impact the deployment? How is the model driving business value? How can it be improved? These are questions that require input from multiple professional roles, and the more diverse the humans are, the more diverse and comprehensive their assessments and contributions can be.
The Ethics Of Diversity In AI

Alongside these considerations of business value, scale and risk management, there are ethical reasons to promote diversity in AI programs. Many organizations are working toward diversity, equity and inclusion (DEI) goals, and just as diversity can help improve AI programs and governance, it can also percolate into other areas of the business and drive value there.
There is also an opportunity to use AI to reach and help more people. Particularly with the advent of generative AI and large language models (LLMs) that can output coherent, culturally specific language, businesses have an opportunity to reach and impact people who might otherwise be ignored or left behind.
Ethically, this could expand access to opportunities in areas such as financial inclusion, education irrespective of location and public services tailored to varying demographic needs and concerns.
As AI continues to mature, its power and sophistication will only raise the stakes for risk mitigation and business competitiveness. A diverse team with a variety of personal and professional backgrounds can help the organization on its path to not only developing trustworthy AI but also growing its reach and potential.
Forbes Business Council is the foremost growth and networking organization for business owners and leaders.