If AI can do parts of your job with greater efficiency and wisdom, how does that make you feel about your own self-worth? If AI users suffer a reduction in their dignity, how will this influence their continued use and productivity with AI? And how could managers using AI and companies deploying AI counteract such a decline in user dignity? On the flip side, if AI increases a user’s sense of self-worth, how can AI companies harness this feeling to improve user productivity and convince non-users to consider using AI?
The rise of ubiquitously accessible artificial intelligence has alarmed many observers, inside and outside the companies that create and use the technology, because of its potential to replace and perhaps even subjugate human intellect. AI optimists, on the other hand, believe that AI will liberate human beings to be more creative, inclusive, and wise. What these two extremes share is the belief that AI will impact individual dignity, defined as a person's belief in their own self-worth, along with several adjacent aspects of identity such as workplace dignity and self-efficacy. These self-perceptions in turn shape how people perceive the benefits and consequences of AI.
My colleagues Amanda Nimon-Peters and Katarzyna Bachnik at the Hult International Business School (I'm on the San Francisco campus, Amanda works in Dubai, and Kate works in Boston) and I are embarking on this research project because we all believe in the power of AI and the vital importance of human dignity. What issues do you think we should consider?