Sean Merat is the CEO of Owl.co.

The insurance industry in recent years—particularly in sectors like disability, workers’ compensation, and bodily injury—has drifted away from personal connections, becoming more transactional and less human. Research shows that the empathy insurance-claims teams extend to customers has the biggest impact on customer satisfaction. Yet the claims-settlement process often lacks that empathy: claimants find it arduous, impersonal and confusing, and they lose trust as a result.

McKinsey & Co. has used the words “seismic impact” to describe the potential of artificial intelligence (AI) in the insurance industry. But what will that impact be on claims adjusters and, importantly, on claimants themselves: the people who need the most help and whose humanity is most at stake?

Human claims professionals are of vital importance to the claims process. But the status quo isn’t human enough, with workloads and inefficiencies perpetuating biases and unfair decisions towards claimants. In response, I believe we’ll see AI play a much greater role in insurance moving forward. But I want to be clear: the goal is never to remove humans. Rather, the goal is to bring humanity back to insurance.

The Human Cost of Overwhelming Workloads

It’s no secret that claims adjusters at large insurance carriers are overwhelmed, with some handling up to 150 claims per month. Each claim can involve extensive documentation—sometimes totaling hundreds or even thousands of pages—making it challenging for adjusters to dedicate sufficient time to each claimant. Such heavy workloads leave little room for meaningful engagement, reducing human interactions to mere data-processing tasks.

A process with inevitable inefficiencies and oversights that relegates claims adjusters to work like robots is not a viable path forward. I’ve seen adjusters spend 40–70% of their days on data-intensive tasks. At best, they spend too much time sifting through documents. At worst? Corners get cut or biases start to guide decisions.

While this may sound like doom and gloom, major transformations are already underway thanks to artificial intelligence. AI can rapidly analyze millions of pages of complex claims records, provide easy-to-consume summaries that help adjusters make correct decisions quickly, surface recommendations for claimants and detect potentially fraudulent claims, decreasing costs and reducing claimants’ wait times.

AI is simply here to help humans do what they do best: provide empathetic service to claimants who are often going through difficult times.

Why Hiring More Adjusters Isn’t the Answer

It might seem that hiring more adjusters would solve the problem, but a labor shortage makes that infeasible: the number of claims adjusters is shrinking. Forecasts show the sector will lose 400,000 jobs by 2026, even as the volume of insurance claims continues to rise.

Even if more staff were available, large teams beget new challenges: more employees are harder to manage, and the biases that come with heavy workloads grow as teams scale. This is known as the team-scaling fallacy.

One of AI’s advantages is its ability to reduce human bias in decision-making. When adjusters have to evaluate thousands of pages per claim, spanning complex medical and legal records, policy documents, claims notes and much more, factors such as personal experiences, belief systems or even something as simple as a bad day can adversely influence their judgment.

This increases the risk of inconsistency and bias in processing claims, leading to unfair or incorrect decisions and fraudulent claims slipping through the cracks.

By contrast, well-designed AI models leverage objective data to assist adjusters, empowering them to make better decisions. I’ve seen firsthand how this leads to fairer, faster and more consistent outcomes for claimants.

Challenges of AI Bias and Predictive Models

Despite all the praise for AI, what happens if something goes wrong? The industry is right to ask questions and apply due diligence; if AI is trained on biased data, it will perpetuate those biases in its recommendations.

From my perspective, the way to avoid unintended bias from AI is to use AI for deterministic tasks rather than AI that employs predictive models—in other words, use AI that focuses on analyzing specific, case-by-case facts.

Deterministic models function based on predefined logic, such as rules that instruct AI to only analyze insurance claims independent of other claims, whereas predictive models rely on patterns from historical data to make recommendations.

Predictive models therefore carry a higher risk of bias, especially in complex claims, such as cases involving human injuries, where patterns drawn from past claims break down. While predictive modeling may have its applications, any system that assists adjusters must focus on the facts of an individual claim.
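
To make this distinction concrete, here is a minimal sketch in Python. The claim fields, rules and model call are hypothetical illustrations, not any carrier’s or vendor’s actual system: a deterministic review applies predefined rules to one claim’s own documents, while a predictive approach would score the claim against patterns learned from other, historical claims.

```python
# Illustrative sketch only; fields, rules and the model reference are hypothetical,
# not a description of any real claims system.

from dataclasses import dataclass

@dataclass
class Claim:
    policy_active: bool            # taken from this claim's own policy documents
    treatment_documented: bool     # supported by this claim's medical records
    reported_days_after_injury: int

def deterministic_review(claim: Claim) -> list[str]:
    """Apply predefined rules to the facts of this single claim only."""
    flags = []
    if not claim.policy_active:
        flags.append("Policy was not active on the date of loss.")
    if not claim.treatment_documented:
        flags.append("No treatment records found in the submitted file.")
    if claim.reported_days_after_injury > 90:
        flags.append("Claim reported more than 90 days after injury.")
    return flags  # the adjuster reviews these flags and makes the decision

# A predictive approach, by contrast, would score this claim against patterns
# learned from thousands of *other* historical claims, e.g.:
#
#   risk_score = trained_model.predict_proba(features_of(claim))
#
# If that historical data is biased, the bias is baked into the score.
```

The deterministic path never looks beyond the claim in front of it, which is why I see it as the safer way to assist adjusters on complex injury claims.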

AI’s feasibility is already evident in areas like auto and property insurance. For example, Deloitte Luxembourg and Tractable validated AI’s accuracy and effectiveness for vehicle damages and home damage, respectively, often without the need for adjuster oversight.

But the stakes are much higher with claims involving human health. Human injuries don’t, and shouldn’t, fit into neatly predefined categories the way damaged cars or homes might. These cases demand careful evaluation on their own merits. Insurance carriers that adopt responsibly designed AI, using deterministic models to assess each claim on its own, will be better positioned to empower employees and optimize the claims process, all while reducing bias to improve outcomes for claimants and extending more empathy to customers.

A Future Built on Collaboration Between AI and Adjusters

I envision collaboration between AI and human adjusters as the future of claims-adjusting in insurance. AI isn’t here to replace adjusters but to manage their most repetitive tasks so they can spend more time helping claimants. The result is a system where adjusters can devote their attention to making informed decisions.

By reviewing claims faster and more accurately, AI helps insurers process claims in a fraction of the time, giving claimants the quick resolution they expect. By managing data-heavy tasks, AI frees adjusters to focus on human connections and tend to each claim with empathy—something AI can’t replicate.

I believe the insurance industry stands at the precipice of a major transformation. AI is not just a tool for reducing workloads. AI is the best way to improve the quality of care claimants receive.
