Currently, AI systems are predominantly, and in some cases exclusively, developed and praised for their capacity to enable autonomous operations, often at the expense of fostering collaboration among humans, thereby diminishing rather than augmenting human intelligence.

This is because most AI systems overlook three critical dimensions, not of artificial intelligence but of artificial integrity, that are necessary to ensure their outcomes uphold human values and societal benefits:

  • The “inner” (the system’s internal mechanisms, such as transparency, accountability, and decision-making integrity),
  • The “outer” (the system’s external impact on societal structures, ecosystems, and shared resources), and
  • The “inter” (-relations, -mediations, -dependencies, -connectedness and -actions between the AI system, humans, and society).

Let’s drive home the importance of designing AI systems with artificial integrity, considering the “inner”, the “outer”, and the “inter” as part of a human-centered approach.

A story about the “inner”

A few days ago, Stefano, an entrepreneur managing his own online business, contacted the customer support team of an online retailer through their AI-powered chatbot. He had an issue with a product he had received, and within minutes, the chatbot identified the problem and provided a quick resolution. Stefano was impressed by the efficiency—it saved him valuable time compared to waiting for a human representative.

However, as he scrolled through the chat interface, Stefano noticed a small icon that showed a queue of other customers waiting for assistance. Curious, he clicked on it and discovered that the chatbot prioritized “premium” customers, like himself, over regular users. It dawned on Stefano that his fast resolution came at the expense of others, who might have been dealing with equally urgent issues but were pushed further down the queue.

The system had been designed to optimize for speed and efficiency for certain users, without addressing the broader implications for fairness. While Stefano appreciated the convenience of his experience, he couldn’t shake the feeling that the system created an imbalance. It was lacking an inner voice that could enable the system to calibrate for fairness and inclusivity. It left him wondering whether technology like this could be reimagined to ensure that speed and efficiency weren’t achieved at the cost of equity among users.

A story about the “outer”

Last winter, Ana, a mother of two, installed a smart thermostat in her apartment to save on energy costs and ensure her home stayed comfortable for her children. The device used AI to learn her family’s habits and automatically adjusted the temperature based on their preferences and daily schedule. Within a few weeks, it was perfectly calibrated, keeping their home warm and cozy when needed and conserving energy when they were out.

One cold evening, as Ana stepped out of her apartment, she caught a glimpse of her neighbors through their door, which had just opened as their eldest stepped out: they were bundled in coats and blankets. They explained they were cutting back on heating to save money due to rising costs.

A few days later, Ana learned that the building’s AI energy system optimized heating efficiency but ignored individual circumstances. While it worked well with her smart thermostat, which provided accurate usage data, her neighbors, manually lowering their heat to save money, sent misleading signals. The AI misinterpreted this as a preference for colder temperatures, eventually making it harder for them to get adequate heat when needed.

Even worse, it didn’t account for the fact that in freezing temperatures, inadequate heating could become a serious safety issue, especially for vulnerable individuals or families. As Ana reflected, she found herself wishing there were a way for her smart thermostat to communicate her willingness to share excess energy savings with her neighbors, to foster a sense of caring for the outer community and support those in need. Despite living in the same building, managed by a single energy system, the AI lacked the capability to foster collaboration or provide solutions to support it.

A story about the “inter”

Last weekend, John, a young student, used a self-checkout kiosk at his local grocery store. The system was equipped with advanced image recognition and AI to streamline the checkout process. It scanned items quickly and even alerted John to discounts on products he’d picked up. He was impressed by how seamlessly the technology worked to speed up his shopping experience.

As he completed his purchase, John noticed a family at the next kiosk struggling with the system. The AI didn’t recognize an item they were trying to scan, prompting repeated errors. A long line started forming behind them, and the store employee tasked with assisting was already helping someone else.

It wasn’t until John was packing his bags that he realized his own role in the situation. He hadn’t stepped in to offer assistance or advocated for the family to receive help sooner. By letting the system handle everything, guided by its focus solely on his needs, John had inadvertently set aside his potential for empathetic interaction, which could have led him to offer help to the family at the kiosk right next to him.

Artificial Integrity is AI’s “inner”, “outer” and “inter”

AI, when designed predominantly or solely to prioritize efficiency for individual purposes, can annihilate an individual’s ability to think and act for or with others. It not only creates inequities and misreads diverse user needs, but also undermines our capacity to help, care, and collaborate with others, eroding our potential to be human at our best and thereby diminishing, rather than augmenting, our humanity.

Autonomy can be a deterrent to collaboration. When individuals use an AI system that is highly autonomous and narrowly focused on serving their needs, it can create a bubble in which they become powerfully self-sufficient and thereby less likely to seek out others for collaborative efforts.

Reflecting on the stories of Stefano, Ana, and John, we could say that the quest for autonomy for oneself, deepening a gap in consideration for others, is part of human behavior and existed long before the development of AI.

This is precisely the purpose of AI designed to enhance humanity: to amplify our humanity, not to atrophy it or distance us from it.

Artificial integrity ensures that AI systems are not blindly focused on efficiency for the sake of a given individual purpose but are also attuned to the ethical, moral, and social dimensions of their impact, thus considering broader interests, including those of others.

To this end, AI systems’ artificial integrity should be examined through critical questions such as:

What internal mechanisms are in place within the AI system to:

  • Ensure integrity in its decision-making processes with regard to human values, such as equity and fairness? The “inner”.
  • Consider the broader environment, including societal structures, cultural norms, and shared resources? The “outer”.
  • Mediate and foster interactions between humans and society, building relationships that prioritize empathy, collaboration, and inclusivity, augmenting rather than diminishing human agency, autonomy, and social interdependence? The “inter”.

That is what distinguishes an AI system functioning with artificial integrity from one running on intelligence alone.
