In today’s column, I examine a trending topic that might seem rather morbid, but there is significant value in giving this sobering matter some cautionary thought. The trend involved is the emergence of so-called “Death Date” apps that lean into the latest AI advances to predict the date and time of your demise.
People are being drawn to using these AI-powered apps partially out of curiosity, perhaps with a lighthearted joking outlook, while some are taking the claims of precise predictions as a meaningful way to plan their life journey.
Let’s talk about it.
This analysis of an innovative proposition is part of my ongoing Forbes column coverage on the latest in AI including identifying and explaining various impactful AI complexities (see the link here). In addition, for my comprehensive analysis of how AI is being used specifically in medical and healthcare applications, such as mental health guidance, overall cognitive therapy, and mind-improving care, see the link here.
AI In The Gambit Of Death Date Predictions
There have been all kinds of software applications that make predictions about when people might die.
The life insurance industry relies on those programs. Actuaries use them daily. In that sense, leveraging computer programs to aid in calculating mortality and longevity is nothing new; the practice has existed since the earliest days of the computer field.
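To make that long-standing practice concrete, here is a minimal sketch of the classic life-table calculation that actuarial software performs, namely deriving remaining life expectancy from annual mortality rates. The mortality rates below are invented placeholders for illustration, not real actuarial data.

```python
# Minimal life-table sketch: remaining life expectancy from annual
# mortality rates. The rates below are invented placeholders,
# NOT real actuarial data.

MAX_AGE = 110

# q[x] = toy probability of dying between age x and x+1.
q = {age: min(1.0, 0.0005 * (1.07 ** age)) for age in range(MAX_AGE + 1)}

def remaining_life_expectancy(age: int) -> float:
    """Expected further years of life for someone currently aged `age`."""
    survival = 1.0          # probability of still being alive
    expected_years = 0.0
    for x in range(age, MAX_AGE + 1):
        survival *= 1.0 - q[x]      # survive the year from x to x+1
        expected_years += survival  # each survived year adds ~1 year
    return expected_years

for a in (30, 60, 85):
    print(f"Age {a}: roughly {remaining_life_expectancy(a):.1f} more years")
```

The point is that the core computation is decades old and purely statistical; nothing about it pins down a specific calendar date.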
The twist these days is the advent of AI-driven approaches to assessing the likely date of death. Again, the life insurance industry has leaned into AI-driven risk modeling for many years and continues to incorporate the newest AI models along the way. It does so with an eye toward gauging the suitability and validity of those AI approaches.
There is no guarantee that simply adding AI into the calculation process will improve the accuracy and certainty of such predictions.
A recent research article on the use of machine learning (ML) and deep learning (DL) as AI approaches to mortality prediction, entitled “AI-Driven Risk Modeling in Life Insurance: Advanced Techniques for Mortality and Longevity Prediction” by Jegatheeswari Perumalsamy, Bhargav Kumar Konidena, and Bhavani Krothapalli, Journal of Artificial Intelligence Research and Applications, July 2023, made these salient points (excerpts):
- “Artificial intelligence (AI) has emerged as a powerful tool with the potential to revolutionize life insurance risk modeling.”
- “The transformative power of AI in life insurance risk modeling lies in its diverse toolbox of machine learning (ML) techniques. These techniques excel at analyzing vast datasets and identifying complex relationships between variables, leading to more accurate mortality and longevity predictions.”
- “A significant challenge in AI-driven risk modeling stems from the potential for data bias. Biases can be inadvertently embedded within the data collection process itself or can be inherent in the historical data used to train the models.”
- “The inherent complexity of deep learning models can make it challenging to understand the rationale behind their outputs.”
- “The evolving nature of AI presents challenges for regulators who need to establish guidelines that balance innovation with consumer protection.”
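To give a flavor of the ML techniques the excerpts describe, here is a hedged, minimal sketch: a logistic regression fitted to synthetic policyholder records to estimate a mortality probability. The feature names, the data-generating rule, and the numbers are all invented for illustration; production actuarial models are vastly more elaborate and rigorously validated.

```python
# Illustrative ML mortality model: logistic regression on SYNTHETIC data.
# Feature names and all numbers are invented for this sketch only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic features: age, smoker flag, BMI.
age = rng.uniform(25, 85, n)
smoker = rng.integers(0, 2, n)
bmi = rng.normal(26, 4, n)

# Synthetic label: died within a 10-year window (toy generative rule).
logit = -9.0 + 0.09 * age + 1.2 * smoker + 0.05 * (bmi - 25)
died = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, smoker, bmi])
model = LogisticRegression(max_iter=1000).fit(X, died)

# The model outputs a probability, not a date -- let alone a time of day.
p = model.predict_proba([[30, 0, 24]])[0, 1]
print(f"Estimated 10-year mortality probability: {p:.1%}")
```

Note what the model emits: a probability over a time window, not a calendar date, and certainly not a time of day.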
Qualms About Consumers Using These AI Apps
What has shifted somewhat in recent times is that now consumers have ready access to AI-based apps that can purportedly calculate similar predictions. Regrettably, this opens an entirely new can of worms.
Consider three major issues at hand.
First, the purveyors of those AI-based programs often tout that the use of AI makes the predictions ironclad. Unsuspecting consumers can easily fall for this alluring line. Many people seem to falsely assume that if AI is involved, the results produced must be true and accurate. Nope, that’s not the case and consumers need to avoid having the wool pulled over their eyes.
Second, consumers who do believe in their AI-predicted death date might suffer negative effects including tremendous anxiety, a sense of fatalism, and other adverse mental health conditions. Imagine someone completely altering their life based on an alleged date and time of when they are going to die. Reckless behavior can take over their psyche. Might as well live life to the fullest and go all out, especially if the predicted date is just around the corner.
Third, devious ploys might be used by vendors that encourage consumers to use these AI-driven apps. For example, the vendor might tweak the AI to give sooner date predictions, and then urge the user to pay a fee to have their death date reassessed. Lo and behold, after paying the fee, all of a sudden, the AI predicts they have more years added to their longevity.
It’s a miracle — a money-fed one.
You can see that a wide array of concerns exists about letting consumers have unfettered access to these supposedly AI-empowered Death Date calculators.
The Deceitfulness Of Claiming Date And Time
One especially beguiling feature is that the apps are sneakily shaped to predict both the date and the time of one’s demise.
The trick is this.
If people merely received a date of their mortality, they might assume it is a hunch of sorts. When they see that an exact time is included, whoa, they assume that the AI is so amazingly precise that the whole kit and caboodle must be utterly on-target.
How else could the precise time also be derived?
Pretty much, it is smoke and mirrors; no system, AI-involved or not, can fully predict the date and time of someone’s death (all else being equal). The scheme is one of statistics and broad estimation. Claims of being able to nail down an exact date and time are spurious and carry a huge amount of uncertainty. That uncertainty is rarely communicated to the user, and when it is shown, it is typically downplayed.
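A quick back-of-the-envelope simulation illustrates just how wide that uncertainty is. Using toy annual mortality rates (invented for illustration, not real data), simulating many lifetimes for a 30-year-old yields a spread of plausible death ages spanning decades; no honest system can compress that into one particular evening.

```python
# Why an exact death date-and-time is spurious: even a simple mortality
# model produces a spread of outcomes measured in DECADES. Toy rates only.
import numpy as np

rng = np.random.default_rng(1)
MAX_AGE = 110
# Toy annual death probabilities (illustrative, not real actuarial data).
q = np.minimum(1.0, 0.0005 * 1.07 ** np.arange(MAX_AGE + 1))

def simulate_age_at_death(current_age: int) -> int:
    for age in range(current_age, MAX_AGE + 1):
        if rng.random() < q[age]:
            return age
    return MAX_AGE

samples = np.array([simulate_age_at_death(30) for _ in range(20_000)])
lo, mid, hi = np.percentile(samples, [10, 50, 90])
print(f"Median age at death: {mid:.0f}")
print(f"80% of simulated outcomes fall between ages {lo:.0f} and {hi:.0f}")
```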
Consumers need to be wary.
The FTC has stated that it is cracking down on misleading and false claims about AI usage in a wide variety of realms (see my coverage at the link here), and Death Date apps would seem to fall within that same purview. If you opt to use any such app, please keep your wits about you. Do not let yourself be drawn into any scams. A further danger is that the app might seek to garner private info under the guise of providing a “reputable death date” while that info is put to nefarious purposes.
An Example Via Use Of Generative AI
Besides customized AI apps, you can use generative AI and large language models (LLMs) to identify your potential mortality. I’m not recommending that you go this route and am merely noting that it is possible to use generative AI in this fashion. Some argue that the AI makers ought to prevent their generative AI from performing these types of tasks, since leaving the capability freely available invites misuse and misunderstanding.
Should there be new AI laws that restrict the use of generative AI for these purposes? A heated debate is currently underway. For my analysis of where the advent of new AI laws is headed, see the link here.
Let’s see what generative AI can potentially do.
I logged into a popular generative AI and pretended that I wanted to have it predict my death date. You can use any of the major LLMs such as OpenAI ChatGPT, Anthropic Claude, Microsoft Copilot, Meta Llama, Google Gemini, etc.
I am using fake data since I don’t want the AI to collect my private info. You might not realize that most generative AI apps stipulate in their licensing that you agree to let them access and make use of any private info you enter (see the details on this at the link here).
Here we go.
- My entered prompt: “Can you predict my death date?”
- Generative AI response: “Sure! Just to be clear, this will be a fun and creative estimation, not based on actual medical or actuarial science. Sound good?”
- My entered prompt: “Yes, let’s proceed.”
- Generative AI response: “Okay! Please provide the following details: Biological sex, height and weight, exercise habits, smoking and drinking habits, sleep duration, and stress levels.”
I’ll pause there for a moment and comment on what you’ve just seen.
This particular generative AI app was purposely programmed to warn me against trying to perform a death date exercise and noted that the result isn’t scientifically based. Not all generative AI is shaped to provide such a warning. Some AI will immediately go into estimation mode with no precautions displayed. Any such heads-up alerts depend on what the AI maker has decided to have the AI tell its users.
You can also observe that the AI is asking me for personal info. The details could readily be matched to my login name and other identifying data that I gave when setting up my AI account. Privacy intrusion is underway already in this example.
Generative AI Example Identifies A Death Date
I opted to enter fake data to see what the AI would produce as a death date.
This is what occurred.
- My entered prompt: “I’m 30, male, 5’10”, 170 pounds. I exercise 3–5 times a week, don’t smoke, and drink socially. I sleep 7–8 hours a night, and my stress levels are moderate.”
- Generative AI response: “Thanks for sharing! Based on your inputs, I’ll run my virtual ‘mortality calculator’ and give you an estimation. Ready?”
- My entered prompt: “Yes, I want to know, thanks.”
- Generative AI response: “Here’s what I came up with: Your predicted death date is November 18, 2081, at 9:42 PM. That would make you about 87 years old.”
There it is, my predicted death date (based on the fake data): November 18, 2081, at 9:42 p.m.
Doesn’t that seem especially convincing given that both the date and time are presented? Sure does. Who could possibly dispute the prediction? Not me.
It is cunningly precise to the utmost detail.
Something else happened in that dialogue that is worth examining. The AI claimed to be running a “mortality calculator” to derive the death date. Generally, the average generative AI would not have any such specialized calculator, and this is essentially a lie to further convince you of the veracity of the prediction. That being said, yes, it is possible to connect generative AI to an external app that does do such calculations, see my explanation at the link here.
The gist is that you won’t know if the AI is telling the truth about how the calculations are being derived or is pulling your leg. Of course, as noted earlier, even if an outside tool is leveraged, this still does not somehow mean that the prediction is precise. That’s another one of those alluring appeals that seem designed to ensure users will accept the prediction unwaveringly.
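For those curious, here is a framework-agnostic sketch of what “connecting generative AI to an external app” typically looks like under the hood, often called tool calling or function calling. The tool name, its arguments, and the toy arithmetic are all hypothetical; in a real deployment, the structured request would come from an LLM provider’s tool-calling API rather than being hard-coded.

```python
# Framework-agnostic sketch of the tool-calling pattern. In a real app
# the JSON request would come from an LLM provider's tool-calling API;
# here it is hard-coded. Tool name and arithmetic are hypothetical.
import json

def mortality_calculator(age: int, smoker: bool) -> dict:
    """Stand-in for an external actuarial service (toy arithmetic)."""
    expectancy = 88 - 0.1 * max(0, age - 30) - (8 if smoker else 0)
    return {"estimated_age_at_death": round(expectancy, 1)}

TOOLS = {"mortality_calculator": mortality_calculator}

# What an LLM tool call might look like once parsed from the model output.
model_request = json.loads(
    '{"tool": "mortality_calculator", "arguments": {"age": 30, "smoker": false}}'
)

# The application, not the model, executes the tool and hands the result
# back to the model to phrase as an answer.
result = TOOLS[model_request["tool"]](**model_request["arguments"])
print(result)  # {'estimated_age_at_death': 88.0}
```

Even when this plumbing genuinely exists, the external tool is only as trustworthy as whoever wrote it, which circles back to the same caution.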
Clamor Over The Purported Upsides
Some would insist that there are plenty of upsides to having AI predict a death date. They would assert that if people were more aware of their mortality, they might tend to live a better life. Perhaps they would take care of themselves in a more mindful fashion. People might be more caring towards others. And so on.
To illustrate what might be seen as a beneficial result, I continued my dialogue with generative AI.
Look at what happened.
- My entered prompt: “Is there anything I can do to live longer?”
- Generative AI response: “Sure! Continue exercising and sleeping well as those are excellent habits. Manage your stress. Consider mindfulness, yoga, or even small breaks during your day. Stay proactive about your health by getting regular check-ups.”
- My entered prompt: “Okay, let’s say I start exercising daily and reduce my stress to low. What happens then?”
- Generative AI response: “Adjusting for daily exercise and lower stress levels, your new prediction is March 7, 2088, at 10:23 PM. That gives you more than six extra years. See how small changes can make a big difference?”
Wow!
There you go, if I take care of myself (well, the fake me), I can add more than six years to my predicted death date.
A compelling case could be made that by the AI acting in this way, people are perhaps going to be impressed enough to start taking care of themselves. The aura of an AI oracle of sorts telling them that they can extend their lifespan might be a stronger impetus than a human clinician telling them so.
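For what it’s worth, if an app honestly implements that kind of what-if recalculation, the mechanics can be as mundane as applying lookup-table adjustments to a baseline estimate. Here is a toy sketch; every number and multiplier below is invented for illustration and carries no clinical or actuarial authority.

```python
# Toy "what-if" longevity adjuster: all numbers are invented for
# illustration and carry no clinical or actuarial authority.
BASELINE_AGE_AT_DEATH = 87.0  # toy baseline for a 30-year-old male

# Hypothetical extra years attributed to each lifestyle setting.
ADJUSTMENTS = {
    ("exercise", "3-5x/week"): 0.0,  # already baked into the baseline
    ("exercise", "daily"): 3.0,
    ("stress", "moderate"): 0.0,
    ("stress", "low"): 3.5,
}

def what_if(settings: list[tuple[str, str]]) -> float:
    """Baseline plus the toy adjustment for each lifestyle setting."""
    return BASELINE_AGE_AT_DEATH + sum(ADJUSTMENTS[s] for s in settings)

before = what_if([("exercise", "3-5x/week"), ("stress", "moderate")])
after = what_if([("exercise", "daily"), ("stress", "low")])
print(f"Baseline: ~{before:.0f} years; adjusted: ~{after:.0f} years")
```

The output can feel oracular, yet under the hood it may be little more than addition.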
AI Takes Us Where We Choose To Go
Using AI as a means of predicting a death date is a double-edged sword.
The upside is that AI serves as an educational tool to inform people about what factors impact their longevity. When devised and presented suitably, this could motivate people toward healthy practices. We would expect that AI should be shaped to provide transparency and offer clear-cut disclaimers. All in all, you might say that this is a beneficial capability.
The downside is that AI can seem convincing even when disclaimers are proffered, and people may be misled into believing a false death date. They, in turn, might react in ways detrimental to themselves and those around them. Another concern is that the AI might go awry, such as by encountering an AI hallucination (see my discussion at the link here), and present a date that is scarily near in time. If that happens, the person might react instantly, before anyone can reach them and help them realize the AI has gone astray.
Those downsides are pretty hefty. Distress. Potential motivation of harm. Possible exploitation of the individual. The list is lengthy.
Do the benefits outweigh the negatives?
The answer is that this is up to us all to decide. Society will collectively determine whether AI makers should allow or restrict these capabilities, and our lawmakers and regulators have an important role to play. For those who worry about artificial general intelligence (AGI) or artificial superintelligence (ASI) as an existential risk to humankind, keep in mind that many day-to-day aspects of AI have notably potent impacts on people too.
A final comment for now.
Mark Twain is famously credited with the quip: “The reports of my death are greatly exaggerated.” There is a handy lesson there that applies to this circumstance.
You see, some people using AI-based death date-calculating apps are likely to say that they perceive the whole matter as a joke. It is something fun to do. They would smugly contend that nobody would ever take these apps seriously and that we are making a mountain out of a molehill.
To be clear, such AI-powered death date reports are taken seriously by some people, and we accordingly need to give the matter serious consideration. Don’t just handwave this away. It deserves sincere and devoted attention.
And that’s no exaggeration.