Aditya Malik, Founder and CEO of Valuematrix.ai. Nasscom Deeptech and CII Mentor associated with many startups in the AI and SaaS space.
After navigating several twists and turns, artificial intelligence (AI) has evolved into a transformative force, progressing from machine learning, natural language processing and multimodal information (text, speech, images) to predictive, generative and, now, autonomous agentic AI.
Throughout the year, generative AI (GenAI) dominated the conversation, and we heard about the political biases of “woke AI” and the political leanings of “anti-woke AI.” That led to three lessons:
• Training data can be manipulated to guide AI in a certain direction
• AI, and large language models (LLMs) in particular, may be trained on data already contaminated by AI-generated content
• The internet, while replete with biases and perspectives, also has data voids, which GenAI fills by hallucinating or fabricating information
While chatbots, virtual assistants and conversational AI made some workers less relevant, anomalies in LLM output prompted developers to design human-in-the-loop models. The importance of human oversight, especially in high-risk sectors such as healthcare, drove people to reskill themselves to work alongside algorithms.
AI is not merely a technological marvel but also a catalyst for developing new skills and capabilities in humans. Far from robbing people of their jobs, AI assistants and agents helped people accomplish more by augmenting their abilities and expanding workforce capabilities. For the AI-savvy enterprise, the goal was no longer just improving time- and cost-saving metrics but increasing revenue by deploying collaborative AI, shifting the debate from replacement to partnership.
Human-Centered AI Design
Then came the question of innovation, insight and creativity. Humans became actively involved in overlaying algorithms with experience, human intelligence and professional identity. For example, a chief human resource officer (CHRO) checked for bias in AI-assisted recruitment processes, a doctor drafted personalized treatment plans based on AI-analyzed patient data, and a stylist created a new fashion line using AI feedback from social media data.
These were all instances of human-centered AI: building a hyper-personalized offering on an AI-generated base layer to enable superior decisions, products and processes anchored in human values, needs, expertise, intuition and intelligence.
In other words, efforts to ensure fairness, transparency, accountability and privacy gathered steam.
This led to the need for ethical AI, prompting AI developers to collaborate with ethicists to balance innovation with responsibility.
Even so, controversies erupted. A paper on disability representation in text-to-image AI models, including DALL-E 2, Midjourney and Stable Diffusion 1.5, revealed that:
• They overwhelmingly produced images of people using wheelchairs, perpetuating the misconception that almost all people with disabilities use wheelchairs
• Most of the images depicted people who looked sad, lonely or in pain—stationary and upset, with no smiles
• Some AI-generated images were dehumanizing because they cropped out humans to show the assistive technologies
Such stereotypes limited opportunities for people with disabilities in education, employment and social interaction by shaping negative perceptions of their capabilities. A more diverse range of disabilities, races, genders and ages in the images may have better represented reality.
The Onus Of Algorithmic Accountability
To curb unethical practices in AI model training, there was a move to humanize AI, that is, to create AI systems that emulate human skills such as empathy and mindfulness. Though these solutions typically supported senior citizens, people with disabilities and the infirm, people viewed them with suspicion because they could curtail human connection.
Next, the concept of human-centric AI grew to push AI models beyond commercial concerns toward socio-economic considerations, such as ensuring the equitable distribution of opportunities, resources and benefits. However, the developer community faced flak when AI solutions seemed to exacerbate existing inequalities. Hence arose the need for explainable AI, or XAI, and a set of techniques was developed to enable those affected to understand, challenge or change outcomes emanating from the “black box” of AI models.
Going forward, who will take responsibility for delivering on the spiraling expectations from new and emerging technologies? Those who design, develop and deploy AI are accountable for the outcomes.
In the coming days, an AI developer will need to hone the skills required to monitor algorithmic output, learn to apply critical thinking to enhance it and measure the impact of human intervention.
It will be the joint responsibility of the organizational leadership, software developers and policymakers to regulate AI algorithms, agents and systems to operate without harming society. Ethical AI certification will vouch for sincere efforts.
What Can We Expect In 2025?
Action on the regulatory front will pick up speed. Though the US has no comprehensive AI law, there are more than a hundred bills seeking to prescribe voluntary guidelines and best practices, some state-level laws and extensions of existing privacy and automated-systems laws to cover AI.
AI has progressed from copilot mode to autopilot mode: AI agents can now make autonomous decisions and take action with little or no human intervention.
PwC has predicted that digital workers will double the workforce across organizations that embrace a “service-as-a-software” approach, paying only for the specific outcomes achieved by specific AI agents. Organizations may employ several such agents to help with legal document review, automated CRM management, cybersecurity testing, etc. While use cases proliferate across industries such as manufacturing, healthcare, retail, education, energy, transport, media, entertainment, government and the public sector, a holistic “Responsible AI” strategy will help in the successful deployment of agents, tools, platforms and solutions.
As a serial entrepreneur working in Deep Tech, I foresee AI agents participating in cross-functional teams, coordinating and contextualizing, sometimes collaborating with humans and always providing unique data-intensive insights.
So update your workforce with both data and AI skills and take protective measures against shadow AI.
Shadow AI, or the unsanctioned use of AI by employees, may increase productivity, but it makes an organization vulnerable to security threats, privacy exposure, IP violations, financial loss and reputational damage. Remember, the AI gold rush has attracted ravenous consumption and involves both defenders and attackers. As researchers, startups and tech giants jump into the AI arms race, it may be better to prioritize safety over growth.