We spend much of our lives clicking:

To read the news, novels, or comics; to write emails and documents, maintain a blog, or post on social media; to watch videos or view photos; to subscribe to newsletters and services; to enroll in online courses, discussion forums, dating services, or online clubs and associations; to buy clothes, electronics, or groceries, or to plan a trip; to express ourselves by commenting on blogs, sharing opinions, chatting on messaging apps like WhatsApp, voting in online polls, or simply voting in elections.

Each of these clicks, while seemingly benign, collectively feeds into a vast digital ecosystem in which our preferences and behaviors are not only recorded, but analyzed and used to influence our decisions at an unprecedented scale.

Social networks have already demonstrated how digital behavior can be used to profile, and ultimately manipulate, their users. Based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests, researchers from the University of Cambridge showed that private traits and attributes can be predicted from digital records of human behavior.
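
To make the mechanics concrete, here is a minimal sketch, assuming a pipeline of the kind such studies describe: reduce a sparse user-by-Like matrix with singular value decomposition, then fit an ordinary classifier on the components. The data, dimensions, trait, and hyperparameters below are entirely synthetic choices of mine, not the study's.

```python
# Toy reconstruction of a Likes-to-traits pipeline: SVD on a sparse
# user-by-Like matrix, then a plain classifier on the components.
# All data and parameters are synthetic and illustrative only.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_users, n_likes = 5000, 2000

# Binary matrix: likes[u, i] = 1 if user u liked page i (synthetic stand-in).
likes = sparse_random(n_users, n_likes, density=0.02, format="csr", random_state=0)
likes.data[:] = 1.0

# A synthetic binary trait correlated with a hidden subset of pages,
# standing in for a private attribute such as a personality dimension.
signal_pages = rng.choice(n_likes, size=50, replace=False)
signal_count = likes[:, signal_pages].sum(axis=1).A.ravel()
trait = signal_count + rng.normal(0, 0.5, n_users) > 1

# Dimensionality reduction, then logistic regression on the components.
components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(components, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Even this toy version makes the point: once Likes are encoded as a matrix, inferring a private trait becomes a routine supervised-learning problem.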

While it is clear that our behavior can be manipulated through our digital interactions, the underlying technologies driving these processes reveal even deeper implications for personal autonomy and societal norms.

Manipulation is often not a hazard or an accident but an intended result of product design. As outlined in an article published in the Wall Street Journal, getting us to click in one place rather than another, at one moment rather than another, for one reason rather than another, is an art, and one the tech giants have mastered.

With the rapid advancement and spread of generative AI, marketing science has never been better equipped to predict behavior. This blurs the line between prediction and manipulation and calls into question the notion of cognitive liberty, because AI-driven influence can bypass our rational defenses and our ability to reject it.

As AI learns from data, it learns from our digital behavioral data, computationally dissecting the intimate habits embedded in everything we click for, and opening a path to decoding the vulnerabilities in human choice processes.

Key studies have not only identified these patterns but also quantified their impact on our decision-making processes.

Researchers have developed a general framework for creating adversaries that manipulate human decision-making, using techniques such as deep reinforcement learning and recurrent neural networks. Their study demonstrated the framework's effectiveness in three different decision-making tasks, highlighting its potential to shape human behavior.
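
The published framework pairs a recurrent-network model of the human learner with a deep reinforcement-learning adversary; the drastically simplified sketch below keeps only the core loop. Here the simulated "human" is a basic reward-learning agent, and the adversary is a myopic, budgeted reward allocator of my own devising, standing in for the paper's far more sophisticated models.

```python
# Toy sketch of an adversary steering a simulated learner's choices.
# The "human" follows a softmax over learned values; the adversary
# spends a limited reward budget to bias choices toward a target option.
import numpy as np

rng = np.random.default_rng(1)
n_options, n_trials, target = 2, 200, 0

values = np.zeros(n_options)   # simulated human's learned option values
alpha, beta = 0.3, 3.0         # learning rate and choice determinism
budget = n_trials // 2         # adversary may hand out at most this many rewards

target_choices = 0
for t in range(n_trials):
    probs = np.exp(beta * values) / np.exp(beta * values).sum()
    choice = rng.choice(n_options, p=probs)
    target_choices += choice == target

    # Adversary's myopic policy: reward only the target option, while
    # the budget lasts, so its learned value dominates the alternatives.
    reward = 1.0 if (choice == target and budget > 0) else 0.0
    budget -= reward > 0

    # Human updates the chosen option's value (Rescorla-Wagner rule).
    values[choice] += alpha * (reward - values[choice])

print(f"Target chosen on {target_choices / n_trials:.0%} of trials")
```

Even with this crude stand-in for the human, the adversary reliably pulls choices toward its target, which is precisely the asymmetry the research formalizes.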

A research paper from the MIT Initiative on the Digital Economy has also examined how AI can learn and manipulate human preferences and behavior. The paper suggests that human oversight may not be sufficient to counter AI’s ability to learn and exploit individual habits and biases, leading to potentially problematic outcomes.

These research findings underscore the potential of AI to shape human behavior subtly, yet powerfully.

Humans being human, keeping a person in the loop is no guarantee that AI will not decide in place of human judgment or erode free will, exerting a kind of artificial grip. A human in the loop is a necessary condition for decisions not to be made solely by AI, but it is largely insufficient: human decisions are susceptible to influence and manipulation, and can be intentionally constrained.

How empowered is the human in the loop to make decisions? What options are they given to decide among? Which data shaped the construction of those options? Is the person well trained and well equipped to make the decision, considering, for example, their age? In other words, the human in the loop is not a panacea.

Given that this year is not merely an election year but potentially a defining one, with national elections scheduled or expected in at least 64 countries representing almost half the global population, our clicks are worth far more than they were before November 30, 2022, the day ChatGPT launched. They have become a window that opens as many vulnerabilities as possibilities, allowing the cognitive systems of citizens around the world to be hacked and becoming the Achilles' heel of our democracies.

In this context, AI offers significant opportunities to enhance democracy: increasing civic engagement, personalizing information for voters, streamlining governmental operations by handling routine tasks and processing data quickly, improving public administration's efficiency and responsiveness, assisting in forming well-informed, relevant policies, and combating misinformation by detecting and flagging false information. But it also presents challenges.

One experiment showed that language models could distort governmental responsiveness to constituent concerns by enabling misinformation.

In democracies like India, Spain, and Mexico, AI-generated disinformation has been used to impersonate politicians and spread falsehoods, exploiting voters' trust in their leaders.

In the US, the Republican National Committee's (RNC) use of an AI-generated video to criticize Joe Biden highlights the growing influence of AI in altering the dynamics of the forthcoming elections.

In our clickbait era, recognizing the "supervasive" influence of AI on our decisions leads us to ask what safeguards are necessary to protect our cognitive freedoms.

As digital data increasingly takes on the value of currency, the need for a corresponding data regulatory system becomes apparent—much like the financial regulatory systems that control and monitor financial transactions nationally and globally.

Such a data regulatory system would require explicit authorization from data subjects before their information is utilized, mirroring financial monitoring frameworks. Each instance of data use would trigger a detailed report, providing the data subject with complete transparency regarding when, how, and by whom their data is accessed.

Moreover, this system would include robust mechanisms to identify and report unauthorized or suspicious activities, thereby empowering users to monitor who is accessing their data and for what purposes.
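
As a thought experiment, here is a minimal sketch of such a consent-and-audit mechanism: a ledger that requires an explicit grant before any data use, records every access as a report for the data subject, and flags uses that fall outside the granted purpose. The class and field names are illustrative assumptions of mine, not drawn from any real regulation or API.

```python
# Minimal consent-and-audit ledger: authorization before use, a report
# for every access, and flagging of uses outside the granted purpose.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Grant:
    subject: str          # whose data
    processor: str        # who may use it
    purpose: str          # what it may be used for
    revoked: bool = False

@dataclass
class AccessReport:
    subject: str
    processor: str
    purpose: str
    timestamp: str
    authorized: bool      # False marks unauthorized or suspicious activity

class DataLedger:
    def __init__(self):
        self.grants: list[Grant] = []
        self.reports: list[AccessReport] = []

    def authorize(self, subject: str, processor: str, purpose: str) -> None:
        # Explicit authorization from the data subject, recorded up front.
        self.grants.append(Grant(subject, processor, purpose))

    def record_use(self, subject: str, processor: str, purpose: str) -> AccessReport:
        # Every use triggers a report; authorization is checked against grants.
        ok = any(g.subject == subject and g.processor == processor
                 and g.purpose == purpose and not g.revoked
                 for g in self.grants)
        report = AccessReport(subject, processor, purpose,
                              datetime.now(timezone.utc).isoformat(), ok)
        self.reports.append(report)
        return report

    def suspicious_activity(self, subject: str) -> list[AccessReport]:
        # The data subject's view of unauthorized uses of their data.
        return [r for r in self.reports if r.subject == subject and not r.authorized]

ledger = DataLedger()
ledger.authorize("alice", "acme-ads", "ad personalization")
ledger.record_use("alice", "acme-ads", "ad personalization")   # authorized
ledger.record_use("alice", "acme-ads", "political profiling")  # flagged
print(len(ledger.suspicious_activity("alice")))  # -> 1
```

Like a bank statement for personal data, the point is not the implementation but the guarantee: no use without a grant, no access without a trace.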

Realizing this ambition holds great promise and would present equally great challenges.

Beyond being a technological challenge, governing and regulating digital data represents a significant, if not the most significant, democratic challenge of our time, requiring global cooperation to overcome.

But in an age where our every click is increasingly influenced by AI, implementing such infrastructure is crucial for safeguarding digital privacy and human agency.

Addressing these hurdles would represent a major step forward, crucial for our societies.

AI can help achieve this breakthrough, but only if it is not merely reduced to an instrument for producing more intelligence for the sake of intelligence. Instead, it should be used for greater integrity, serving the broader needs of society—a concept I’ve coined ‘Artificial Integrity’—to preserve and uphold human values, with every click.

The question remains not just what AI can do, but what it should do.

The answer lies in our hands as much as it does in the algorithms that govern our digital lives, thus our very lives.
