Alston Lin is the founder of OffDeal, an AI-native investment bank for small business M&A.

AI is eating the software industry. Virtually every new startup is an AI startup, and everyone else is strategizing on how to add AI to their product.

In the race to integrate AI into SaaS products, however, many builders are overlooking a critical usability flaw. My company learned this lesson the hard way—and it led us to completely rethink our business model.

How AI Is Generally Implemented In SaaS

There are three major ways to leverage generative AI in SaaS:

1. Reading Data: Using AI to analyze existing natural language data.

2. Reading Instructions: Employing AI to understand user instructions for bespoke workflows.

3. Writing Data: Using AI to produce natural-language output, such as personalized messages.

Most of the best use cases do all three, but reading instructions is the most powerful since it allows the same tool to handle an infinite number of workflows. However, it is also the most challenging to create a usable interface for.
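
To make these three modes concrete, here is a minimal sketch in Python. The call_llm helper and the prompts are purely illustrative stand-ins, not our actual implementation or any particular provider’s API:

```python
# Illustrative sketch only: `call_llm` is a hypothetical stand-in for whatever
# LLM API you use, and the prompts are invented for this example.
def call_llm(prompt: str) -> str:
    """Hypothetical helper that sends a prompt to an LLM and returns its reply."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

# 1. Reading data: analyze existing natural-language data.
def summarize_reviews(reviews: list[str]) -> str:
    return call_llm("Summarize the main complaints in these customer reviews:\n" + "\n".join(reviews))

# 2. Reading instructions: the user's own words define the workflow.
def run_user_workflow(instruction: str, records: list[str]) -> str:
    return call_llm(f"Follow this instruction: {instruction}\n\nData:\n" + "\n".join(records))

# 3. Writing data: generate natural-language output such as a personalized message.
def draft_outreach(company: str, detail: str) -> str:
    return call_llm(f"Write a short, personalized outreach email to {company} that mentions {detail}.")
```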

At OffDeal, we initially built a SaaS product to discover and research small-business acquisition targets. Once users specified what they wanted to know, our AI agent crawled the web, parsed relevant pages and wrote up the answer. Our early adopters had typically hired teams of interns to perform this work manually, so they were excited to automate it away with AI.

Then reality hit.

Most People Are Bad Prompt Engineers

Computer programming is the writing of code to unambiguously and precisely instruct a computer to perform a task, and it comes in various levels of abstraction. The higher the level, the more removed it is from the details of how the computer interprets the instructions.

Prompt engineering is essentially programming with an ultra-high-level language: English. While this lowers the barrier to entry enough for almost anyone to try it without prior experience and see some results, the vast majority of people are bad at it.

Consequently, users found it difficult to instruct our AI agents effectively. Why? Most people are accustomed to giving instructions to colleagues who already share a wealth of industry-level, company-level and project-level context. Without that context, LLMs often misinterpret user instructions, and users struggle to write precise, unambiguous instructions for their desired outcomes.

We expected non-technical finance professionals to become expert prompt engineers overnight. This was an unrealistic assumption. Furthermore, when instructing a computer using a traditional programming language, if something doesn’t work, the default assumption is that the code given to the computer is flawed. However, with LLMs, it’s difficult to determine if incorrect results stem from:

• Limitations of the AI

• The user’s own lack of prompt engineering skills

• Missing context

This causes many users to believe the AI is incapable, when they’re simply prompting it incorrectly or failing to provide the necessary context.

In some cases, prompt engineering can be even more challenging than coding due to the non-deterministic nature of LLMs. Rewording a prompt in a way that, to a human, conveys the same meaning can cause an LLM to significantly alter its output for no apparent reason. Therefore, running a prompt at scale across thousands of cases requires substantial A/B testing to ensure consistency and correctness.
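
For illustration, here is a minimal sketch of what that A/B testing might look like, again using hypothetical call_llm and is_correct helpers rather than any specific library:

```python
# Minimal sketch of A/B testing two prompt wordings across many test cases.
# `call_llm` and `is_correct` are hypothetical stand-ins, not a real library's API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Replace with a call to your model provider.")

def is_correct(answer: str, expected: str) -> bool:
    # In practice this might be an exact match, a rubric, or a human review.
    return expected.lower() in answer.lower()

PROMPT_A = "List the top three competitors of {company}."
PROMPT_B = "Who are {company}'s three biggest competitors? Answer with a short list."

def evaluate(prompt_template: str, cases: list[dict]) -> float:
    """Run one prompt wording across every test case and return its accuracy."""
    correct = sum(
        is_correct(call_llm(prompt_template.format(**case)), case["expected"])
        for case in cases
    )
    return correct / len(cases)

# Two wordings that look interchangeable to a human can score very differently,
# so measure both before rolling either out across thousands of cases:
# score_a = evaluate(PROMPT_A, test_cases)
# score_b = evaluate(PROMPT_B, test_cases)
```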

The Blank Screen Syndrome

During user interviews, we uncovered another problem: the “Blank Screen Syndrome,” also known as Writer’s Block. With most SaaS products, users rarely encounter a large text box expecting more than a few words. Our AI interface presented users with a daunting blank screen.

This open-ended input significantly increased the cognitive load, leading to decision paralysis. Users, faced with endless possibilities, often froze and gave up, or reverted to the simplest possible use of the tool. Good UX generally encourages presenting users with a small number of discrete actions, which minimizes the cognitive load required for them to use the tool and remain engaged.

By asking users to think up and write a prompt, we drastically increased the cognitive load required to get value from our product, so much so that many gave up and stopped using it. This is also why offering a chat interface on top of an existing SaaS product is generally considered a gimmick.

One mitigation we used was to offer prompt presets for customers’ most common use cases. This reduces the cognitive load: Reviewing and tweaking a preset takes far less effort than writing a prompt from scratch.
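
To make this concrete, here is a minimal sketch of how prompt presets could be wired up. The preset names and fields are invented for this example, not taken from our product:

```python
# Illustrative prompt presets: the user picks one, fills in a few blanks,
# then edits the result instead of writing a prompt from scratch.
PROMPT_PRESETS = {
    "Find acquisition targets": (
        "Find {industry} businesses in {region} with roughly {revenue} in annual "
        "revenue, and list the owner's name and contact details where available."
    ),
    "Research a company": (
        "Summarize {company}: what it sells, who its customers are, and any public "
        "signals that the owner may be open to selling."
    ),
}

def build_prompt(preset_name: str, **fields: str) -> str:
    """Turn a preset plus a few user-supplied fields into a full prompt."""
    return PROMPT_PRESETS[preset_name].format(**fields)

# Example: filling in a few blanks is a much smaller ask than a blank text box.
draft = build_prompt("Find acquisition targets",
                     industry="HVAC", region="Texas", revenue="$5M")
```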

Bridging The AI UX Gap

We eventually pivoted from the “do it yourself” SaaS model to a “do it for you” AI-enabled service business. Instead of asking clients to interact directly with our AI, we became intermediaries: Customers interfaced with a human, who used their prompt engineering skills to instruct the AI behind the scenes.

The integration of AI into business solutions is the future, but success will belong to those who can bridge the gap between AI’s potential and users’ ability to instruct it. Strategies to help include:

• Decreasing the cognitive load required to provide instructions through interface modifications, such as presets.

• Employing a forward-deployed engineer for enterprise workflows, responsible for setting up prompts that fit each customer’s workflow.

• Becoming an AI-enabled services business, where users interact with humans instead of the tool directly.

Generative AI has lowered the barrier to entry for building AI SaaS products. The challenge now lies in creating interfaces usable by your customers.

